Introduction
Entity Linking (EL), which is also called Entity Disambiguation (ED), is the task of mapping mentions in text to the corresponding entities in a given Knowledge Base (KB). This task is an important and challenging stage in text understanding because mentions are usually ambiguous: different named entities may share the same surface form, and the same entity may have multiple aliases. EL is a key step for information extraction (IE) and has many applications, such as knowledge base population (KBP) and question answering (QA).
Existing EL methods can be divided into two categories: local models and global models. Local models mainly focus on the contextual words surrounding the mentions, and disambiguate mentions independently. These methods do not work well when the context information is not rich enough. Global models take into account the topical coherence among the referred entities within the same document, and disambiguate mentions jointly. Most previous global models BIBREF0, BIBREF1, BIBREF2 calculate the pairwise scores between all candidate entities and select the most relevant group of entities. However, the consistency among wrong entities as well as that among right ones is involved, which not only increases the model complexity but also introduces noise. For example, in Figure 1, there are three mentions "France", "Croatia" and "2018 World Cup", and each mention has three candidate entities. Here, "France" may refer to French Republic, France national basketball team or France national football team in the KB. It is difficult to disambiguate using local models, due to the scarce common information between the contextual words of "France" and the descriptions of its candidate entities. Besides, the topical coherence among the wrong entities related to the basketball team (linked by an orange dashed line) may make global models mistakenly link "France" to France national basketball team. So, how can we solve these problems?
We note that mentions in text usually have different disambiguation difficulty according to the quality of contextual information and the topical coherence. Intuitively, if we start with mentions that are easier to disambiguate and obtain correct results, it will be effective to utilize the information provided by previously referred entities to disambiguate subsequent mentions. In the above example, it is much easier to map "2018 World Cup" to 2018 FIFA World Cup based on their common contextual words "France", "Croatia", "4-2". Then, it is obvious that "France" and "Croatia" should be linked to their national football teams, because football-related terms are mentioned many times in the description of 2018 FIFA World Cup.
Inspired by this intuition, we design the solution with three principles: (i) utilizing local features to rank the mentions in text and deal with them in a sequential manner; (ii) utilizing the information of previously referred entities for subsequent entity disambiguation; (iii) making decisions from a global perspective to avoid error propagation when a previous decision is wrong.
In order to achieve these aims, we consider global EL as a sequence decision problem and propose a deep reinforcement learning (RL) based model, RLEL for short, which consists of three modules: Local Encoder, Global Encoder and Entity Selector. For each mention and its candidate entities, the Local Encoder encodes the local features to obtain their latent vector representations. Then, the mentions are ranked according to their disambiguation difficulty, which is measured by the learned vector representations. In order to enforce global coherence between mentions, the Global Encoder encodes the local representations of mention-entity pairs in a sequential manner via an LSTM network, which maintains a long-term memory of the features of entities that have been selected in previous states. The Entity Selector uses a policy network to choose the target entities from the candidate sets. For a single disambiguation decision, the policy network not only considers the current mention-entity representations, but also the features of referred entities in previous states, which are provided by the Global Encoder. In this way, the Entity Selector is able to take actions based on the current state and previous ones. After disambiguating all mentions in the sequence, delayed rewards are used to adjust its policy in order to gain an optimized global decision.
A deep RL model, which learns to directly optimize the overall evaluation metrics, works much better than models that learn with loss functions that just evaluate a particular single decision. Owing to this property, RL has been successfully used in many NLP tasks, such as information retrieval BIBREF3, dialogue systems BIBREF4 and relation classification BIBREF5. To the best of our knowledge, we are the first to design an RL model for global entity linking. In this paper, our RL model is able to produce more accurate results by exploring the long-term influence of independent decisions and encoding the entities disambiguated in previous states.
In summary, the main contributions of our paper include the following aspects:
Methodology
The overall structure of our RLEL model is shown in Figure 2. The proposed framework mainly includes three parts: the Local Encoder, which encodes the local features of mentions and their candidate entities; the Global Encoder, which encodes the global coherence of mentions in a sequential manner; and the Entity Selector, which selects an entity from the candidate set. As the Entity Selector and the Global Encoder are mutually correlated, we train them jointly. Moreover, the Local Encoder, as the basis of the entire framework, is trained independently before the joint training process starts. In the following, we introduce the technical details of these modules.
Preliminaries
Before introducing our model, we first define the entity linking task. Formally, given a document $D$ with a set of mentions $M = \lbrace m_1, m_2,...,m_k\rbrace$, each mention $m_t \in D$ has a set of candidate entities $C_{m_t} = \lbrace e_{t}^1, e_{t}^2,..., e_{t}^n\rbrace$. The task of entity linking is to map each mention $m_t$ to its correct target entity $e_{t}^+$, or to return "NIL" if there is no correct target entity in the knowledge base. Before selecting the target entity, we need to generate a certain number of candidate entities for model selection.
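To make the formulation concrete, the per-mention data can be organized as follows; this is only an illustrative sketch, and the field names are ours rather than the paper's.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Mention:
    surface: str                 # the mention string as it appears in the document D
    context: List[str]           # surrounding words, used later by the local encoder
    candidates: List[str] = field(default_factory=list)  # candidate entity ids, C_{m_t}
    gold: Optional[str] = None   # correct target entity e_t^+, or None for "NIL"
```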
Inspired by previous works BIBREF6, BIBREF7, BIBREF8, we use the mention's redirect and disambiguation pages in Wikipedia to generate candidate sets. For those mentions without corresponding disambiguation pages, we use their n-grams to retrieve candidates BIBREF8. In most cases, the disambiguation page contains many entities, sometimes even hundreds. To optimize the model's memory usage and avoid unnecessary computation, the candidate sets need to be filtered BIBREF9, BIBREF0, BIBREF1. Here we utilize the XGBoost model BIBREF10 as an entity ranker to reduce the size of the candidate set. The features used in XGBoost can be divided into two aspects: one is string similarity, such as the Jaro-Winkler distance between the entity title and the mention; the other is semantic similarity, such as the cosine distance between the mention context representation and the entity embedding. Furthermore, we also use statistical features based on the pageviews and hyperlinks in Wikipedia. Empirically, we get the pageview of an entity from the Wikipedia Tool Labs, which counts the number of visits to each entity page in Wikipedia. After ranking the candidate sets based on the above features, we take the top k scored entities as the final candidate set for each mention.
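A minimal sketch of this pruning step is shown below. The concrete feature set and ranker settings (e.g. the number of trees) are illustrative assumptions, and the Jaro-Winkler similarity is passed in precomputed so the sketch does not depend on a particular string-matching library.

```python
import numpy as np
import xgboost as xgb

def candidate_features(mention_vec, entity_vec, string_sim, pageview, n_hyperlinks):
    # One feature row per (mention, candidate): string similarity, semantic similarity,
    # and statistical features derived from Wikipedia pageviews and hyperlinks.
    cos = float(np.dot(mention_vec, entity_vec) /
                (np.linalg.norm(mention_vec) * np.linalg.norm(entity_vec) + 1e-8))
    return [string_sim, cos, np.log1p(pageview), np.log1p(n_hyperlinks)]

def prune_candidates(feature_rows, labels, group_sizes, top_k=5):
    # Train a pairwise ranker over (mention, candidate) rows grouped per mention,
    # then keep only the top-k scored candidates of each mention.
    ranker = xgb.XGBRanker(objective="rank:pairwise", n_estimators=100)
    ranker.fit(np.asarray(feature_rows), np.asarray(labels), group=group_sizes)
    scores = ranker.predict(np.asarray(feature_rows))
    kept, start = [], 0
    for g in group_sizes:
        order = np.argsort(-scores[start:start + g])[:top_k]
        kept.append([start + i for i in order])   # row indices of the surviving candidates
        start += g
    return kept
```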
Local Encoder
Given a mention $m_t$ and the corresponding candidate set $\lbrace e_t^1, e_t^2,..., e_t^k\rbrace$, we aim to get their local representations based on the mention context and the candidate entity descriptions. For each mention, we firstly select its $n$ surrounding words and represent them as word embeddings using a pre-trained lookup table BIBREF11. Then, we use a Long Short-Term Memory (LSTM) network to encode the contextual word sequence $\lbrace w_c^1, w_c^2,..., w_c^n\rbrace$ as a fixed-size vector $V_{m_t}$. The description of an entity is encoded as $D_{e_t^i}$ in the same way. Apart from the description of an entity, there is much other valuable information in the knowledge base. To make full use of this information, many researchers have trained entity embeddings by combining the description, category, and relationships of entities. As shown in BIBREF0, entity embeddings compress the semantic meaning of entities and drastically reduce the need for manually designed features or co-occurrence statistics. Therefore, we use the pre-trained entity embedding $E_{e_t^i}$ and concatenate it with the description vector $D_{e_t^i}$ to enrich the entity representation. The concatenation result is denoted by $V_{e_t^i}$.
After getting $V_{e_t^i}$, we concatenate it with $V_{m_t}$ and then pass the concatenation result to a multilayer perceptron (MLP). The MLP outputs a scalar that represents the local similarity between the mention $m_t$ and the candidate entity $e_t^i$. The local similarity is calculated by the following equation:
$$\Psi (m_t, e_t^i) = MLP(V_{m_t}\oplus {V_{e_t^i}})$$ (Eq. 9)
where $\oplus$ indicates vector concatenation. To distinguish the correct target entity from wrong candidate entities when training the local encoder, we utilize a hinge loss that ranks the ground truth higher than the others. The rank loss function is defined as follows:
$$L_{local} = max(0, \gamma -\Psi (m_t, e_t^+)+\Psi (m_t, e_t^-))$$ (Eq. 10)
When optimizing the objective function, we minimize the rank loss similar to BIBREF0, BIBREF1. In this ranking model, a training instance is constructed by pairing a positive target entity $e_t^+$ with a negative entity $e_t^-$, where $\gamma > 0$ is a margin parameter and our purpose is to make the score of the positive target entity $e_t^+$ at least a margin $\gamma$ higher than that of the negative candidate entity $e_t^-$.
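A minimal sketch of the local encoder and the rank loss above, written against the Keras API, is given below. The 15-word context, 512 LSTM units and margin 0.1 follow the settings reported later in the experiments, while the vocabulary size and the 256-unit hidden layer are placeholders of our own.

```python
import tensorflow as tf

def build_local_encoder(vocab_size=50000, emb_dim=300, lstm_units=512, ent_emb_dim=300):
    # Mention context and entity description are encoded by LSTMs into fixed-size vectors,
    # then concatenated with the pre-trained entity embedding and scored by an MLP.
    ctx_in = tf.keras.Input(shape=(15,), dtype="int32")    # context words of the mention
    desc_in = tf.keras.Input(shape=(15,), dtype="int32")   # keywords of the entity description
    ent_emb_in = tf.keras.Input(shape=(ent_emb_dim,))      # pre-trained entity embedding E_{e_t^i}

    word_emb = tf.keras.layers.Embedding(vocab_size, emb_dim)  # pre-trained lookup table in practice
    v_m = tf.keras.layers.LSTM(lstm_units)(word_emb(ctx_in))   # V_{m_t}
    d_e = tf.keras.layers.LSTM(lstm_units)(word_emb(desc_in))  # D_{e_t^i}
    v_e = tf.keras.layers.Concatenate()([d_e, ent_emb_in])     # V_{e_t^i}

    h = tf.keras.layers.Concatenate()([v_m, v_e])
    h = tf.keras.layers.Dense(256, activation="relu")(h)
    score = tf.keras.layers.Dense(1)(h)                        # Psi(m_t, e_t^i)
    return tf.keras.Model([ctx_in, desc_in, ent_emb_in], score)

def hinge_rank_loss(pos_score, neg_score, gamma=0.1):
    # L_local = max(0, gamma - Psi(m, e+) + Psi(m, e-)), averaged over the batch
    return tf.reduce_mean(tf.maximum(0.0, gamma - pos_score + neg_score))
```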
With the local encoder, we obtain the representations of the mention context and candidate entities, which will be used as input to the global encoder and entity selector. In addition, the similarity scores calculated by the MLP will be utilized for ranking mentions in the global encoder.
Global Encoder
In the global encoder module, we aim to enforce the topical coherence among the mentions and their target entities. So, we use an LSTM network, which is capable of maintaining a long-term memory, to encode the ranked mention sequence. What we need to emphasize is that our global encoder only encodes the mentions that have already been disambiguated by the entity selector, which is denoted as $V_{a_t}$.
As mentioned above, the mentions should be sorted according to their contextual information and topical coherence. So, we firstly divide adjacent mentions into segments in the order they appear in the document, based on the observation that topical consistency attenuates with the distance between mentions. Then, we sort the mentions in a segment based on the local similarity and place the mention that has a higher similarity value at the front of the sequence. The local similarity of $m_i$ and its corresponding candidate entity is defined above (Eq. 9). On this basis, we define $\Psi _{max}(m_i, e_i^a)$ as the maximum local similarity between $m_i$ and its candidate set $C_{m_i} = \lbrace e_i^1, e_i^2,..., e_i^n\rbrace$. We use $\Psi _{max}(m_i, e_i^a)$ as the criterion when sorting mentions. For instance, if $\Psi _{max}(m_i, e_i^a) > \Psi _{max}(m_j, e_j^b)$, then we place $m_i$ before $m_j$. Under these circumstances, the mentions in the front positions may not be able to make better use of global consistency, but their target entities have a high degree of similarity to the context words, which allows them to be disambiguated without relying on additional information. In the end, the previously selected target entity information is encoded by the global encoder, and the encoding result serves as input to the entity selector.
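The segmentation and ranking just described amount to a few lines; a sketch is given below, where the segment length of 4 matches the value chosen later in the ablation study and the helper names are ours.

```python
def order_mention_sequences(mentions, max_local_sim, seg_len=4):
    # mentions are given in document order; max_local_sim[m] = Psi_max(m, .) over m's candidates.
    segments = [mentions[i:i + seg_len] for i in range(0, len(mentions), seg_len)]
    # within each segment, the easiest mention (highest maximum local similarity) comes first
    return [sorted(seg, key=lambda m: max_local_sim[m], reverse=True) for seg in segments]
```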
Before using the entity selector to choose target entities, we pre-train the global LSTM network. During the training process, we feed not only positive samples but also negative ones to the LSTM. By doing this, we can enhance the robustness of the network. In the global encoder module, we adopt the following cross entropy loss function to train the model.
$$L_{global} = -\frac{1}{n}\sum _x{\left[y\ln {y^{\prime }} + (1-y)\ln (1-y^{\prime })\right]}$$ (Eq. 12)
where $y\in \lbrace 0,1\rbrace$ represents the label of the candidate entity: if the candidate entity is correct, $y=1$; otherwise, $y=0$. $y^{\prime }\in (0,1)$ indicates the output of our model. After pre-training the global encoder, we start using the entity selector to choose the target entity for each mention and encode these selections.
Entity Selector
In the entity selector module, we choose the target entity from the candidate set based on the results of the local and global encoders. In the process of sequential disambiguation, each selection result will have an impact on subsequent decisions. Therefore, we transform the choice of the target entity into a reinforcement learning problem and view the entity selector as an agent. In particular, the agent is designed as a policy network, which can learn a stochastic policy and prevents the agent from getting stuck at an intermediate state BIBREF12. Under the guidance of the policy, the agent decides which action (choosing the target entity from the candidate set) should be taken at each state, and receives a delayed reward when all the selections are made. In the following, we first describe the state, action and reward. Then, we detail how to select the target entity via a policy network.
The result of entity selection is based on the current state information. For time $t$ , the state vector $S_t$ is generated as follows:
$$S_t = V_{m_i}^t\oplus {V_{e_i}^t}\oplus {V_{feature}^t}\oplus {V_{e^*}^{t-1}}$$ (Eq. 15)
where $\oplus$ indicates vector concatenation. $V_{m_i}^t$ and $V_{e_i}^t$ respectively denote the vectors of $m_i$ and $e_i$ at time $t$. For each mention, there are multiple candidate entities corresponding to it. With the purpose of comparing the semantic relevance between the mention and each candidate entity at the same time, we make multiple copies of the mention vector. Formally, we extend $V_{m_i}^t \in \mathbb {R}^{1\times {n}}$ to $V_{m_i}^t{^{\prime }} \in \mathbb {R}^{k\times {n}}$ and then combine it with $V_{e_i}^t \in \mathbb {R}^{k\times {n}}$. Since $V_{m_i}^t$ and $V_{e_i}^t$ mainly represent semantic information, we add a feature vector $V_{feature}^t$ to enrich the lexical and statistical features. These features mainly include the popularity of the entity, the edit distance between the entity description and the mention context, the number of identical words in the entity description and the mention context, etc. After getting these feature values, we combine them into a vector and add it to the current state. In addition, the global vector $V_{e^*}^{t-1}$ is also added to $S_t$. As mentioned in the global encoder module, $V_{e^*}^{t-1}$ is the output of the global LSTM network at time $t-1$, which encodes the mention context and target entity information from time 1 to $t-1$. Thus, the state $S_t$ contains the current information and the previous decisions, while also covering the semantic representations and a variety of statistical features. Next, the concatenated vector will be fed into the policy network to generate an action.
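A sketch of the state construction is given below; tiling the previous global vector across the candidates, like the mention vector, is our reading of Eq. 15 rather than something spelled out in the text.

```python
import numpy as np

def build_state(v_mention, cand_entity_vecs, cand_feature_vecs, prev_global_vec):
    # S_t = V_m + V_e + V_feature + V_{e*}^{t-1} (concatenated); the mention and global vectors
    # are tiled so that all k candidates can be scored against them in a single batch.
    k = cand_entity_vecs.shape[0]
    v_m = np.tile(v_mention[None, :], (k, 1))        # R^{1 x n} -> R^{k x n}
    v_g = np.tile(prev_global_vec[None, :], (k, 1))  # output of the global LSTM at time t-1
    return np.concatenate([v_m, cand_entity_vecs, cand_feature_vecs, v_g], axis=1)
```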
According to the state at each time step, we take a corresponding action. Specifically, we define the action at time step $t$ as selecting the target entity $e_t^*$ for $m_t$. The size of the action space is the number of candidate entities for each mention, where the action $a_t \in \lbrace 0,1,2,...,k\rbrace$ indicates the position of the selected entity in the candidate entity list. Clearly, each action is a direct indicator of target entity selection in our model. After completing all the actions in the sequence, we get a delayed reward.
The agent takes the reward value as the feedback for its action and learns the policy based on it. Since the current selection result has a long-term impact on subsequent decisions, we do not give an immediate reward when taking an action. Instead, a delayed reward is given as follows, which reflects whether the action improves the overall performance or not.
$$R(a_t) = p(a_t)\sum _{j=t}^{T}p(a_j) + (1 - p(a_t))(\sum _{j=t}^{T}p(a_j) + t - T)$$ (Eq. 16)
where $p(a_t)\in \lbrace 0,1\rbrace$ indicates whether the current action is correct or not: when the action is correct, $p(a_t)=1$; otherwise, $p(a_t)=0$. Hence $\sum _{j=t}^{T}p(a_j)$ and $\sum _{j=t}^{T}p(a_j) + t - T$ respectively represent the number of correct and wrong actions from time $t$ to the end of the episode. Based on the above definition, our delayed reward can be used to guide the learning of the policy for entity linking.
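Once the per-step correctness flags are known, the delayed reward of Eq. 16 reduces to a short loop; a sketch (with steps indexed from 0, so T is the last index) is shown below.

```python
def delayed_rewards(correct_flags):
    # correct_flags[t] = p(a_t): 1 if the action at step t picked the gold entity, else 0.
    T = len(correct_flags) - 1
    rewards = []
    for t, p_t in enumerate(correct_flags):
        tail = sum(correct_flags[t:])                        # correct actions from t to the end
        rewards.append(tail if p_t == 1 else tail + t - T)   # Eq. 16
    return rewards
```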
After defining the state, action, and reward, our main challenge becomes choosing an action from the action space. To solve this problem, we sample each action from a policy network $\pi _{\Theta }(a|s)$. The structure of the policy network is shown in Figure 3. The input of the network is the current state, including the mention context representation, candidate entity representation, feature representation, and the encoding of previous decisions. We concatenate these representations and feed them into a multilayer perceptron; for each hidden layer, we generate the output by:
$$h_i(S_t) = Relu(W_i*h_{i-1}(S_t) + b_i)$$ (Eq. 17)
where $W_i$ and $b_i$ are the parameters of the $i$-th hidden layer, and $h_i(S_t)$ is obtained through the ReLU activation function. After getting the output of the last hidden layer, we feed it into a softmax layer which generates the probability distribution over actions. The probability distribution is generated as follows:
$$\pi (a|s) = Softmax(W * h_l(S) + b)$$ (Eq. 18)
where $W$ and $b$ are the parameters of the softmax layer. For each mention in the sequence, we take an action to select the target entity from its candidate set. After completing all decisions in the episode, each action receives an expected reward, and our goal is to maximize the expected total reward. Formally, the objective function is defined as:
$$\begin{split} J(\Theta ) &= \mathbb {E}_{(s_t, a_t){\sim }P_\Theta {(s_t, a_t)}}R(s_1{a_1}...s_L{a_L}) \\ &=\sum _{t}\sum _{a}\pi _{\Theta }(a|s)R(a_t) \end{split}$$ (Eq. 19)
where $P_\Theta {(s_t, a_t)}$ is the state transition function, $\pi _{\Theta }(a|s)$ indicates the probability of taking action $a$ under state $s$, and $R(a_t)$ is the expected reward of action $a$ at time step $t$. According to the REINFORCE policy gradient algorithm BIBREF13, we update the policy parameters as shown in the following equation.
$$\Theta \leftarrow \Theta + \alpha \sum _{t}R(a_t)\nabla _{\Theta }\log \pi _{\Theta }(a|s)$$ (Eq. 20)
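The update above can be sketched as follows. For brevity, a linear-softmax policy stands in for the multilayer policy network described earlier, and the learning rate matches the 1e-3 reported in the experiments; this is an illustrative sketch, not the paper's implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def reinforce_update(theta, states, actions, rewards, lr=1e-3):
    # One REINFORCE step: Theta <- Theta + alpha * sum_t R(a_t) * grad log pi(a_t | s_t).
    # theta has shape (d, k): a linear-softmax policy over the k candidate slots.
    grad = np.zeros_like(theta)
    for s, a, r in zip(states, actions, rewards):
        probs = softmax(s @ theta)       # pi(.|s)
        dlog = -np.outer(s, probs)       # softmax part of d log pi(a|s) / d theta ...
        dlog[:, a] += s                  # ... plus the indicator term for the chosen action
        grad += r * dlog
    return theta + lr * grad
```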
As the global encoder and the entity selector are correlated mutually, we train them jointly after pre-training the two networks. The details of the joint learning are presented in Algorithm 1.
Algorithm 1: The Policy Learning for Entity Selector
Input: training data consisting of multiple documents $D = \lbrace D_1, D_2, ..., D_N\rbrace$ and the target entities of their mentions $\Gamma = \lbrace T_1, T_2, ..., T_N\rbrace$.
Initialize the policy network parameters $\Theta$ and the global LSTM network parameters $\Phi$;
for each document $D_k$ in $D$:
    Generate the candidate set for each mention;
    Divide the mentions in $D_k$ into multiple sequences $S = \lbrace S_1, S_2, ..., S_N\rbrace$;
    for each sequence $S_k$ in $S$:
        Rank the mentions $M = \lbrace m_1, m_2, ..., m_n\rbrace$ in $S_k$ based on the local similarity;
        for each mention $m_t$ in $M$:
            Sample the target entity $e_t^*$ for $m_t$ with $\pi _{\Theta }(a|s)$;
            Input $V_{m_t}$ and $V_{e_t^*}$ to the global LSTM network;
        end for
        Compute the delayed reward $R(a_t)$ for each action;
        Update the parameters $\Theta$ of the policy network:
        $\Theta \leftarrow \Theta + \alpha \sum _{t}R(a_t)\nabla _{\Theta }\log \pi _{\Theta }(a|s)$
        Update the parameters $\Phi$ of the global LSTM network;
    end for
end for
Experiment
In order to evaluate the effectiveness of our method, we train the RLEL model and validate it on a series of popular datasets that are also used by BIBREF0, BIBREF1. To avoid overfitting to one dataset, we use both AIDA-Train and Wikipedia data in the training set. Furthermore, we compare RLEL with several baseline methods, where our model achieves state-of-the-art results. We implement our models in Tensorflow and run experiments on 4 Tesla V100 GPUs.
Experiment Setup
We conduct experiments on several different types of public datasets, including news and encyclopedia corpora. The training set consists of the AIDA-Train and Wikipedia datasets, where AIDA-Train contains 18448 mentions and Wikipedia contains 25995 mentions. In order to compare with previous methods, we evaluate our model on AIDA-B and the other datasets. These datasets are well-known and have been used for the evaluation of most entity linking systems. The statistics of the datasets are shown in Table 1.
AIDA-CoNLL BIBREF14 is annotated on Reuters news articles. It contains training (AIDA-Train), validation (AIDA-A) and test (AIDA-B) sets.
ACE2004 BIBREF15 is a subset of the ACE2004 Coreference documents.
MSNBC BIBREF16 contains the top two stories in each of ten news categories (Politics, Business, Sports, etc.).
AQUAINT BIBREF17 is a news corpus from the Xinhua News Service, the New York Times, and the Associated Press.
WNED-CWEB BIBREF18 is randomly picked from the FACC1 annotated ClueWeb 2012 dataset.
WNED-WIKI BIBREF18 is crawled from Wikipedia pages with its original hyperlink annotation.
OURSELF-WIKI is crawled by ourselves from Wikipedia pages.
During the training of our RLEL model, we select the top K candidate entities for each mention to optimize memory and running time. We define $R_t$ as the recall of the correct target entity in the top K candidate list. According to our statistics, when K is set to 1, $R_t$ is 0.853; when K is 5, $R_t$ is 0.977; when K increases to 10, $R_t$ is 0.993. Empirically, we choose the top 5 candidate entities as the input of our RLEL model. For the entity description, there is a lot of redundant information in the Wikipedia page; to reduce the impact of noisy data, we use the TextRank algorithm BIBREF19 to select 15 keywords as the description of the entity. Simultaneously, we choose 15 words around the mention as its context. In the global LSTM network, when the number of mentions does not reach the set length, we adopt a mention padding strategy: we copy the last mention in the sequence until the number of mentions reaches the set length.
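The mention padding strategy is straightforward; a sketch follows, with the fixed sequence length of 4 taken from the ablation study below.

```python
def pad_mention_sequence(mentions, seq_len=4):
    # Repeat the last mention until the sequence reaches the fixed length.
    padded = list(mentions)
    while len(padded) < seq_len:
        padded.append(padded[-1])
    return padded
```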
We set the dimensions of the word embedding and entity embedding to 300, where the word embeddings and entity embeddings are released by BIBREF20 and BIBREF0 respectively. For the local LSTM network, the number of LSTM cell units is set to 512, the batch size is 64, and the rank margin $\gamma$ is 0.1. Similarly, in the global LSTM network, the number of LSTM cell units is 700 and the batch size is 16. In the above two LSTM networks, the learning rate is set to 1e-3, the dropout probability is set to 0.8, and Adam is utilized as the optimizer. In addition, we set the number of MLP layers to 4 and extend the prior feature dimension to 50 in the policy network.
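For reference, the reported settings can be collected into a single configuration object; the key names below are ours, not the paper's.

```python
RLEL_CONFIG = {
    "word_emb_dim": 300, "entity_emb_dim": 300,
    "local_lstm_units": 512, "local_batch_size": 64, "rank_margin": 0.1,
    "global_lstm_units": 700, "global_batch_size": 16,
    "learning_rate": 1e-3, "dropout": 0.8,
    "policy_mlp_layers": 4, "prior_feature_dim": 50,
    "top_k_candidates": 5, "context_words": 15, "description_keywords": 15,
}
```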
Comparing with Previous Work
We compare RLEL with a series of EL systems which report state-of-the-art results on the test datasets. These include various methods such as classification models BIBREF17, ranking models BIBREF21, BIBREF15 and probabilistic graphical models BIBREF18, BIBREF14, BIBREF22, BIBREF0, BIBREF1. In addition, Cheng et al. BIBREF23 formulate their global decision problem as an Integer Linear Program (ILP) which incorporates entity-relation inference, Globerson et al. BIBREF24 introduce a multi-focal attention model which allows each candidate to focus on limited mentions, and Yamada et al. BIBREF25 propose a word and entity embedding model specifically designed for EL.
We use the standard Accuracy, Precision, Recall and F1 at mention level (Micro) as the evaluation metrics:
$$Accuracy = \frac{|M \cap M^*|}{|M \cup M^*|}$$ (Eq. 31)
$$Precision = \frac{|M \cap M^*|}{|M|}$$ (Eq. 32)
where $M^*$ is the gold standard set of linked name mentions, and $M$ is the set of linked name mentions output by an EL method.
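The recall and F1 referred to above follow the standard micro-averaged definitions, which are not spelled out in the excerpt:

$$Recall = \frac{|M \cap M^*|}{|M^*|}, \qquad F1 = \frac{2 \cdot Precision \cdot Recall}{Precision + Recall}$$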
As in previous work, we use in-KB accuracy and micro F1 to evaluate our method. We first test the model on the AIDA-B dataset. From Table 2, we can observe that our model achieves the best result. The previous best results on this dataset are generated by BIBREF0, BIBREF1, which both build CRF models and calculate the pairwise scores between all candidate entities. In contrast, our model only considers the consistency of the target entities and ignores the relationships between incorrect candidates. The experimental results show that our model can reduce the impact of noisy data and improve the accuracy of disambiguation. Apart from experimenting on AIDA-B, we also conduct experiments on several other datasets to verify the generalization performance of our model.
From Table 3, we can see that RLEL achieves relatively good performance on ACE2004, CWEB and WIKI. At the same time, previous models BIBREF0, BIBREF1, BIBREF23 achieve better performance on news datasets such as MSNBC and AQUAINT, but their results on encyclopedia datasets such as WIKI are relatively poor. To avoid overfitting to particular datasets and improve the robustness of our model, we not only use AIDA-Train but also add Wikipedia data to the training set. In the end, our model achieves the best overall performance.
For most existing EL systems, entities with lower frequency are difficult to disambiguate. To gain further insight, we analyze the accuracy on the AIDA-B dataset for situations where the gold entities have low popularity. We divide the gold entities according to their pageviews in Wikipedia; the statistical disambiguation results are shown in Table 4. Since some pageviews cannot be obtained, we only count part of the gold entities. The results indicate that our model is still able to work well for low-frequency entities. However, for medium-frequency gold entities, our model does not work well enough. The most important reason is that other candidate entities corresponding to these medium-frequency gold entities have higher pageviews and local similarities, which makes it difficult for the model to distinguish them.
Discussion on different RLEL variants
To demonstrate the effects of RLEL, we evaluate our model under different conditions. First, we evaluate the effect of the sequence length on global decision making. Second, we assess whether sorting the mentions has a positive effect on the results. Third, we analyze the results of not adding the global encoding during entity selection. Last, we compare our RL selection strategy with the greedy choice.
A document may contain multiple topics, so we do not add all mentions to a single sequence. In practice, we add some adjacent mentions to a sequence and use reinforcement learning to select entities from beginning to end. To analyze the impact of the number of mentions on joint disambiguation, we experiment with sequences of different lengths. The results on AIDA-B are shown in Figure 4. We can see that when the sequence is too short or too long, the disambiguation results are both very poor. When the sequence length is less than 3, the delayed reward cannot work well in reinforcement learning, and when the sequence length reaches 5 or more, noisy data may be added. Finally, we choose 4 adjacent mentions to form a sequence.
In this section, we test whether ranking mentions is helpful for entity selection. At first, we directly input the mentions into the global encoder in the order they appear in the text. We record the disambiguation results and compare them with the method that adopts ranked mentions. As shown in Figure 5a, the model with ranked mentions achieves better performance on most datasets, indicating that it is effective to place mentions with higher local similarity at the front of the sequence. It is worth noting that the effect of ranking mentions is not obvious on the MSNBC dataset; the reason is that most mentions in MSNBC have similar local similarities, so the order of disambiguation has little effect on the final result.
Most previous methods mainly use the similarities between entities to correlate them, but our model associates them by encoding the selected entity information. To assess whether the global encoding contributes to disambiguation rather than adding noise, we compare the performance with and without the global information. When the global encoding is not added, the current state only contains the mention context representation, candidate entity representation and feature representation; notably, the selected target entity information is not taken into account. From the results in Figure 5b, we can see that the model with global encoding achieves an improvement of 4% accuracy over the method without global encoding.
To illustrate the necessity of adopting reinforcement learning for entity selection, we compare two entity selection strategies, as in BIBREF5. Specifically, we perform entity selection with reinforcement learning and with a greedy choice, respectively. The greedy choice selects the entity with the largest local similarity from the candidate set, whereas the reinforcement learning selection is guided by the delayed reward, which has a global perspective. In the comparative experiment, we keep the other conditions consistent and just replace the RL selection with the greedy choice. Based on the results in Figure 5c, we can conclude that our entity selector performs much better than the greedy strategy.
Case Study
Table 5 shows two entity selection examples produced by our RLEL model. For the multiple mentions appearing in a document, we first sort them according to their local similarities, and then select the target entities in order with the reinforcement learning model. From the results of sorting and disambiguation, we can see that our model is able to utilize the topical consistency between mentions and make full use of the selected target entity information.
Related Work
The related work can be roughly divided into two groups: entity linking and reinforcement learning.
Entity Linking
Entity linking falls broadly into two major approaches: local and global disambiguation. Early studies use local models to resolve mentions independently; they usually disambiguate mentions based on lexical matching between the mention's surrounding words and the entity profile in the reference KB. Various methods have been proposed to model a mention's local context, ranging from binary classification BIBREF17 to ranking models BIBREF26, BIBREF27. In these methods, a large number of hand-designed features are applied. For some marginal mentions from which it is difficult to extract features, researchers also exploit data retrieved by search engines BIBREF28, BIBREF29 or Wikipedia sentences BIBREF30. However, the feature engineering and search engine methods are both time-consuming and laborious. Recently, with the popularity of deep learning models, representation learning has been utilized to automatically find semantic features BIBREF31, BIBREF32. The entity representations learned by jointly modeling textual contexts and the knowledge base are effective in combining multiple sources of information. To make full use of the information contained in such representations, we also utilize pre-trained entity embeddings in our model.
In recent years, with the assumption that the target entities of all mentions in a document should be related, many novel global models for joint linking have been proposed. Assuming topical coherence among mentions, the authors of BIBREF33, BIBREF34 construct factor graph models, which represent the mentions and candidate entities as variable nodes and exploit factor nodes to denote a series of features. Two recent studies BIBREF0, BIBREF1 use a fully-connected pairwise Conditional Random Field (CRF) model and exploit loopy belief propagation to estimate the max-marginal probability. Moreover, PageRank or random walks BIBREF35, BIBREF18, BIBREF7 are utilized to select the target entity for each mention. The above probabilistic models usually need to predefine many features, and it is difficult to calculate the max-marginal probability as the number of nodes increases. In order to automatically learn features from the data, Cao et al. BIBREF9 apply a Graph Convolutional Network to flexibly encode entity graphs. However, graph-based methods are computationally expensive because there are many candidate entity nodes in the graph.
To reduce the computation between candidate entity pairs, Globerson et al. BIBREF24 introduce a coherence model with an attention mechanism, where each mention only focuses on a fixed number of mentions. Unfortunately, choosing the number of attended mentions is not easy in practice. Two recent studies BIBREF8, BIBREF36 finish linking all mentions by scanning the pairs of mentions at most once, assuming each mention only needs to be consistent with one other mention in the document. The limitation of their method is that the consistency information is too sparse, resulting in low confidence. Similar to us, Guo et al. BIBREF18 also sort mentions according to the difficulty of disambiguation, but they do not make full use of the information of previously referred entities for subsequent entity disambiguation. Nguyen et al. BIBREF2 use a sequence model, but they simply encode the results of the greedy choice and measure the similarities between the global encoding and the candidate entity representations. Their model neither considers the long-term impact of current decisions on subsequent choices, nor adds the selected target entity information to the current state to help disambiguation.
Reinforcement Learning
In the last few years, reinforcement learning has emerged as a powerful tool for solving complex sequential decision-making problems. It is well known for its great success in games, such as Go BIBREF37 and Atari games BIBREF38. Recently, reinforcement learning has also been successfully applied to many natural language processing tasks and achieved good performance BIBREF12, BIBREF39, BIBREF5. Feng et al. BIBREF5 use reinforcement learning for the relation classification task by filtering out the noisy data from the sentence bag, and they achieve huge improvements compared with traditional classifiers. Zhang et al. BIBREF40 apply reinforcement learning to sentence representation by automatically discovering task-relevant structures. For automatic taxonomy induction from a set of terms, Han et al. BIBREF41 design an end-to-end reinforcement learning model to determine which term to select and where to place it in the taxonomy, which effectively reduces the error propagation between the two phases. Inspired by the above works, we also add reinforcement learning to our framework.
Conclusions
In this paper, we consider entity linking as a sequence decision problem and present a reinforcement learning based model. Our model learns the policy of selecting target entities in a sequential manner and makes decisions based on the current state and previous ones. By utilizing the information of previously referred entities, we can take advantage of global consistency to disambiguate mentions. Each selection result in the current state also has a long-term impact on subsequent decisions, which allows the learned policy to have a global view. In experiments, we evaluate our method on AIDA-B and other well-known datasets, and the results show that our system outperforms state-of-the-art solutions. In the future, we would like to use reinforcement learning to detect mentions and determine which mention should be disambiguated first in the document.
Acknowledgments
This research is supported by the National Key Research and Development Program of China (No. 2018YFB1004703), the Beijing Municipal Science and Technology Project (No. Z181100002718004), and the National Natural Science Foundation of China (No. 61602466).
0481a8edf795768d062c156875d20b8fb656432c | 0481a8edf795768d062c156875d20b8fb656432c_0 | Q: what are the mentioned cues?
Text: Introduction
Entity Linking (EL), which is also called Entity Disambiguation (ED), is the task of mapping mentions in text to corresponding entities in a given knowledge Base (KB). This task is an important and challenging stage in text understanding because mentions are usually ambiguous, i.e., different named entities may share the same surface form and the same entity may have multiple aliases. EL is key for information retrieval (IE) and has many applications, such as knowledge base population (KBP), question answering (QA), etc.
Existing EL methods can be divided into two categories: local model and global model. Local models concern mainly on contextual words surrounding the mentions, where mentions are disambiguated independently. These methods are not work well when the context information is not rich enough. Global models take into account the topical coherence among the referred entities within the same document, where mentions are disambiguated jointly. Most of previous global models BIBREF0 , BIBREF1 , BIBREF2 calculate the pairwise scores between all candidate entities and select the most relevant group of entities. However, the consistency among wrong entities as well as that among right ones are involved, which not only increases the model complexity but also introduces some noises. For example, in Figure 1, there are three mentions "France", "Croatia" and "2018 World Cup", and each mention has three candidate entities. Here, "France" may refer to French Republic, France national basketball team or France national football team in KB. It is difficult to disambiguate using local models, due to the scarce common information in the contextual words of "France" and the descriptions of its candidate entities. Besides, the topical coherence among the wrong entities related to basketball team (linked by an orange dashed line) may make the global models mistakenly refer "France" to France national basketball team. So, how to solve these problems?
We note that, mentions in text usually have different disambiguation difficulty according to the quality of contextual information and the topical coherence. Intuitively, if we start with mentions that are easier to disambiguate and gain correct results, it will be effective to utilize information provided by previously referred entities to disambiguate subsequent mentions. In the above example, it is much easier to map "2018 World Cup" to 2018 FIFA World Cup based on their common contextual words "France", "Croatia", "4-2". Then, it is obvious that "France" and "Croatia" should be referred to the national football team because football-related terms are mentioned many times in the description of 2018 FIFA World Cup.
Inspired by this intuition, we design the solution with three principles: (i) utilizing local features to rank the mentions in text and deal with them in a sequence manner; (ii) utilizing the information of previously referred entities for the subsequent entity disambiguation; (iii) making decisions from a global perspective to avoid the error propagation if the previous decision is wrong.
In order to achieve these aims, we consider global EL as a sequence decision problem and proposed a deep reinforcement learning (RL) based model, RLEL for short, which consists of three modules: Local Encoder, Global Encoder and Entity Selector. For each mention and its candidate entities, Local Encoder encodes the local features to obtain their latent vector representations. Then, the mentions are ranked according to their disambiguation difficulty, which is measured by the learned vector representations. In order to enforce global coherence between mentions, Global Encoder encodes the local representations of mention-entity pairs in a sequential manner via a LSTM network, which maintains a long-term memory on features of entities which has been selected in previous states. Entity Selector uses a policy network to choose the target entities from the candidate set. For a single disambiguation decision, the policy network not only considers the pairs of current mention-entity representations, but also concerns the features of referred entities in the previous states which is pursued by the Global Encoder. In this way, Entity Selector is able to take actions based on the current state and previous ones. When eliminating the ambiguity of all mentions in the sequence, delayed rewards are used to adjust its policy in order to gain an optimized global decision.
Deep RL model, which learns to directly optimize the overall evaluation metrics, works much better than models which learn with loss functions that just evaluate a particular single decision. By this property, RL has been successfully used in many NLP tasks, such as information retrieval BIBREF3 , dialogue system BIBREF4 and relation classification BIBREF5 , etc. To the best of our knowledge, we are the first to design a RL model for global entity linking. And in this paper, our RL model is able to produce more accurate results by exploring the long-term influence of independent decisions and encoding the entities disambiguated in previous states.
In summary, the main contributions of our paper mainly include following aspects:
Methodology
The overall structure of our RLEL model is shown in Figure 2. The proposed framework mainly includes three parts: Local Encoder which encodes local features of mentions and their candidate entities, Global Encoder which encodes the global coherence of mentions in a sequence manner and Entity Selector which selects an entity from the candidate set. As the Entity Selector and the Global Encoder are correlated mutually, we train them jointly. Moreover, the Local Encoder as the basis of the entire framework will be independently trained before the joint training process starts. In the following, we will introduce the technical details of these modules.
Preliminaries
Before introducing our model, we firstly define the entity linking task. Formally, given a document $D$ with a set of mentions $M = \lbrace m_1, m_2,...,m_k\rbrace $ , each mention $ m_t \in D$ has a set of candidate entities $C_{m_t} = \lbrace e_{t}^1, e_{t}^2,..., e_{t}^n\rbrace $ . The task of entity linking is to map each mention $m_t$ to its corresponding correct target entity $e_{t}^+$ or return "NIL" if there is not correct target entity in the knowledge base. Before selecting the target entity, we need to generate a certain number of candidate entities for model selection.
Inspired by the previous works BIBREF6 , BIBREF7 , BIBREF8 , we use the mention's redirect and disambiguation pages in Wikipedia to generate candidate sets. For those mentions without corresponding disambiguation pages, we use its n-grams to retrieve the candidates BIBREF8 . In most cases, the disambiguation page contains many entities, sometimes even hundreds. To optimize the model's memory and avoid unnecessary calculations, the candidate sets need to be filtered BIBREF9 , BIBREF0 , BIBREF1 . Here we utilize the XGBoost model BIBREF10 as an entity ranker to reduce the size of candidate set. The features used in XGBoost can be divided into two aspects, the one is string similarity like the Jaro-Winkler distance between the entity title and the mention, the other is semantic similarity like the cosine distance between the mention context representation and the entity embedding. Furthermore, we also use the statistical features based on the pageview and hyperlinks in Wikipedia. Empirically, we get the pageview of the entity from the Wikipedia Tool Labs which counts the number of visits on each entity page in Wikipedia. After ranking the candidate sets based on the above features, we take the top k scored entities as final candidate set for each mention.
Local Encoder
Given a mention $m_t$ and the corresponding candidate set $\lbrace e_t^1, e_t^2,..., \\ e_t^k\rbrace $ , we aim to get their local representation based on the mention context and the candidate entity description. For each mention, we firstly select its $n$ surrounding words, and represent them as word embedding using a pre-trained lookup table BIBREF11 . Then, we use Long Short-Term Memory (LSTM) networks to encode the contextual word sequence $\lbrace w_c^1, w_c^2,..., w_c^n\rbrace $ as a fixed-size vector $V_{m_t}$ . The description of entity is encoded as $D_{e_t^i}$ in the same way. Apart from the description of entity, there are many other valuable information in the knowledge base. To make full use of these information, many researchers trained entity embeddings by combining the description, category, and relationship of entities. As shown in BIBREF0 , entity embeddings compress the semantic meaning of entities and drastically reduce the need for manually designed features or co-occurrence statistics. Therefore, we use the pre-trained entity embedding $E_{e_t^i}$ and concatenate it with the description vector $D_{e_t^i}$ to enrich the entity representation. The concatenation result is denoted by $V_{e_t^i}$ .
After getting $V_{e_t^i}$ , we concatenate it with $V_{m_t}$ and then pass the concatenation result to a multilayer perceptron (MLP). The MLP outputs a scalar to represent the local similarity between the mention $m_t$ and the candidate entity $e_t^i$ . The local similarity is calculated by the following equations:
$$\Psi (m_t, e_t^i) = MLP(V_{m_t}\oplus {V_{e_t^i}})$$ (Eq. 9)
Where $\oplus $ indicates vector concatenation. With the purpose of distinguishing the correct target entity and wrong candidate entities when training the local encoder model, we utilize a hinge loss that ranks ground truth higher than others. The rank loss function is defined as follows:
$$L_{local} = max(0, \gamma -\Psi (m_t, e_t^+)+\Psi (m_t, e_t^-))$$ (Eq. 10)
When optimizing the objective function, we minimize the rank loss similar to BIBREF0 , BIBREF1 . In this ranking model, a training instance is constructed by pairing a positive target entity $e_t^+$ with a negative entity $e_t^-$ . Where $\gamma > 0$ is a margin parameter and our purpose is to make the score of the positive target entity $e_t^+$ is at least a margin $\gamma $ higher than that of negative candidate entity $e_t^-$ .
With the local encoder, we obtain the representation of mention context and candidate entities, which will be used as the input into the global encoder and entity selector. In addition, the similarity scores calculated by MLP will be utilized for ranking mentions in the global encoder.
Global Encoder
In the global encoder module, we aim to enforce the topical coherence among the mentions and their target entities. So, we use an LSTM network which is capable of maintaining the long-term memory to encode the ranked mention sequence. What we need to emphasize is that our global encoder just encode the mentions that have been disambiguated by the entity selector which is denoted as $V_{a_t}$ .
As mentioned above, the mentions should be sorted according to their contextual information and topical coherence. So, we firstly divide the adjacent mentions into a segment by the order they appear in the document based on the observation that the topical consistency attenuates along with the distance between the mentions. Then, we sort mentions in a segment based on the local similarity and place the mention that has a higher similarity value in the front of the sequence. In Equation 1, we define the local similarity of $m_i$ and its corresponding candidate entity $e_t^i$ . On this basis, we define $\Psi _{max}(m_i, e_i^a)$ as the the maximum local similarity between the $m_i$ and its candidate set $C_{m_i} = \lbrace e_i^1, e_i^2,..., e_i^n\rbrace $ . We use $\Psi _{max}(m_i, e_i^a)$ as criterion when sorting mentions. For instance, if $\Psi _{max}(m_i, e_i^a) > \Psi _{max}(m_j, e_j^b)$ then we place $m_i$ before $m_j$ . Under this circumstances, the mentions in the front positions may not be able to make better use of global consistency, but their target entities have a high degree of similarity to the context words, which allows them to be disambiguated without relying on additional information. In the end, previous selected target entity information is encoded by global encoder and the encoding result will be served as input to the entity selector.
Before using entity selector to choose target entities, we pre-trained the global LSTM network. During the training process, we input not only positive samples but also negative ones to the LSTM. By doing this, we can enhance the robustness of the network. In the global encoder module, we adopt the following cross entropy loss function to train the model.
$$L_{global} = -\frac{1}{n}\sum _x{\left[y\ln {y^{^{\prime }}} + (1-y)\ln (1-y^{^{\prime }})\right]}$$ (Eq. 12)
Where $y\in \lbrace 0,1\rbrace $ represents the label of the candidate entity. If the candidate entity is correct $y=1$ , otherwise $y=0$ . $y^{^{\prime }}\in (0,1)$ indicates the output of our model. After pre-training the global encoder, we start using the entity selector to choose the target entity for each mention and encode these selections.
Entity Selector
In the entity selector module, we choose the target entity from candidate set based on the results of local and global encoder. In the process of sequence disambiguation, each selection result will have an impact on subsequent decisions. Therefore, we transform the choice of the target entity into a reinforcement learning problem and view the entity selector as an agent. In particular, the agent is designed as a policy network which can learn a stochastic policy and prevents the agent from getting stuck at an intermediate state BIBREF12 . Under the guidance of policy, the agent can decide which action (choosing the target entity from the candidate set)should be taken at each state, and receive a delay reward when all the selections are made. In the following part, we first describe the state, action and reward. Then, we detail how to select target entity via a policy network.
The result of entity selection is based on the current state information. For time $t$ , the state vector $S_t$ is generated as follows:
$$S_t = V_{m_i}^t\oplus {V_{e_i}^t}\oplus {V_{feature}^t}\oplus {V_{e^*}^{t-1}}$$ (Eq. 15)
Where $\oplus $ indicates vector concatenation. The $V_{m_i}^t$ and $V_{e_i}^t$ respectively denote the vector of $m_i$ and $e_i$ at time $t$ . For each mention, there are multiple candidate entities correspond to it. With the purpose of comparing the semantic relevance between the mention and each candidate entity at the same time, we copy multiple copies of the mention vector. Formally, we extend $V_{m_i}^t \in \mathbb {R}^{1\times {n}}$ to $V_{m_i}^t{^{\prime }} \in \mathbb {R}^{k\times {n}}$ and then combine it with $V_{e_i}^t \in \mathbb {R}^{k\times {n}}$ . Since $V_{m_i}^t$ and $V_{m_i}^t$0 are mainly to represent semantic information, we add feature vector $V_{m_i}^t$1 to enrich lexical and statistical features. These features mainly include the popularity of the entity, the edit distance between the entity description and the mention context, the number of identical words in the entity description and the mention context etc. After getting these feature values, we combine them into a vector and add it to the current state. In addition, the global vector $V_{m_i}^t$2 is also added to $V_{m_i}^t$3 . As mentioned in global encoder module, $V_{m_i}^t$4 is the output of global LSTM network at time $V_{m_i}^t$5 , which encodes the mention context and target entity information from $V_{m_i}^t$6 to $V_{m_i}^t$7 . Thus, the state $V_{m_i}^t$8 contains current information and previous decisions, while also covering the semantic representations and a variety of statistical features. Next, the concatenated vector will be fed into the policy network to generate action.
According to the status at each time step, we take corresponding action. Specifically, we define the action at time step $t$ is to select the target entity $e_t^*$ for $m_t$ . The size of action space is the number of candidate entities for each mention, where $a_i \in \lbrace 0,1,2...k\rbrace $ indicates the position of the selected entity in the candidate entity list. Clearly, each action is a direct indicator of target entity selection in our model. After completing all the actions in the sequence we will get a delayed reward.
The agent takes the reward value as the feedback of its action and learns the policy based on it. Since current selection result has a long-term impact on subsequent decisions, we don't give an immediate reward when taking an action. Instead, a delay reward is given by follows, which can reflect whether the action improves the overall performance or not.
$$R(a_t) = p(a_t)\sum _{j=t}^{T}p(a_j) + (1 - p(a_t))(\sum _{j=t}^{T}p(a_j) + t - T)$$ (Eq. 16)
where $p(a_t)\in \lbrace 0,1\rbrace $ indicates whether the current action is correct or not. When the action is correct $p(a_t)=1$ otherwise $p(a_t)=0$ . Hence $\sum _{j=t}^{T}p(a_j)$ and $\sum _{j=t}^{T}p(a_j) + t - T$ respectively represent the number of correct and wrong actions from time t to the end of episode. Based on the above definition, our delayed reward can be used to guide the learning of the policy for entity linking.
After defining the state, action, and reward, our main challenge becomes to choose an action from the action space. To solve this problem, we sample the value of each action by a policy network $\pi _{\Theta }(a|s)$ . The structure of the policy network is shown in Figure 3. The input of the network is the current state, including the mention context representation, candidate entity representation, feature representation, and encoding of the previous decisions. We concatenate these representations and fed them into a multilayer perceptron, for each hidden layer, we generate the output by:
$$h_i(S_t) = Relu(W_i*h_{i-1}(S_t) + b_i)$$ (Eq. 17)
Where $W_i$ and $ b_i$ are the parameters of the $i$ th hidden layer, through the $relu$ activation function we get the $h_i(S_t)$ . After getting the output of the last hidden layer, we feed it into a softmax layer which generates the probability distribution of actions. The probability distribution is generated as follows:
$$\pi (a|s) = Softmax(W * h_l(S) + b)$$ (Eq. 18)
Where the $W$ and $b$ are the parameters of the softmax layer. For each mention in the sequence, we will take action to select the target entity from its candidate set. After completing all decisions in the episode, each action will get an expected reward and our goal is to maximize the expected total rewards. Formally, the objective function is defined as:
$$\begin{split} J(\Theta ) &= \mathbb {E}_{(s_t, a_t){\sim }P_\Theta {(s_t, a_t)}}R(s_1{a_1}...s_L{a_L}) \\ &=\sum _{t}\sum _{a}\pi _{\Theta }(a|s)R(a_t) \end{split}$$ (Eq. 19)
Where $P_\Theta {(s_t, a_t)}$ is the state transfer function, $\pi _{\Theta }(a|s)$ indicates the probability of taking action $a$ under the state $s$ , $R(a_t)$ is the expected reward of action $a$ at time step $t$ . According to REINFORCE policy gradient algorithm BIBREF13 , we update the policy gradient by the way of equation 9.
$$\Theta \leftarrow \Theta + \alpha \sum _{t}R(a_t)\nabla _{\Theta }\log \pi _{\Theta }(a|s)$$ (Eq. 20)
As the global encoder and the entity selector are correlated mutually, we train them jointly after pre-training the two networks. The details of the joint learning are presented in Algorithm 1.
Algorithm 1: The Policy Learning for Entity Selector
Input: training documents $D = \lbrace D_1, D_2, ..., D_N\rbrace $ and the target entities for their mentions $\Gamma = \lbrace T_1, T_2, ..., T_N\rbrace $.
Initialize the policy network parameters $\Theta $ and the global LSTM network parameters $\Phi $;
for each document $D_k$ in $D$:
    Generate the candidate set for each mention;
    Divide the mentions in $D_k$ into multiple sequences $S = \lbrace S_1, S_2, ..., S_N\rbrace $;
    for each sequence $S_k$ in $S$:
        Rank the mentions $M = \lbrace m_1, m_2, ..., m_n\rbrace $ in $S_k$ based on the local similarity;
        for each mention in $M$:
            Sample its target entity from the candidate set with the policy network $\pi _{\Theta }(a|s)$;
            Input the mention and the sampled entity to the global LSTM network;
        At the end of sampling, update the parameters:
            Compute the delayed reward $R(a_t)$ for each action;
            Update the parameters $\Theta $ of the policy network:
                $\Theta \leftarrow \Theta + \alpha \sum _{t}R(a_t)\nabla _{\Theta }\log \pi _{\Theta }(a|s)$
            Update the parameters $\Phi $ of the global LSTM network.
Experiment
In order to evaluate the effectiveness of our method, we train the RLEL model and validate it on a series of popular datasets that are also used by BIBREF0, BIBREF1. To avoid overfitting to a single dataset, we use both AIDA-Train and Wikipedia data in the training set. Furthermore, we compare RLEL with several baseline methods, and our model achieves state-of-the-art results. We implement our models in TensorFlow and run experiments on 4 Tesla V100 GPUs.
Experiment Setup
We conduct experiments on several different types of public datasets, including news and encyclopedia corpora. The training set consists of the AIDA-Train and Wikipedia datasets, where AIDA-Train contains 18448 mentions and Wikipedia contains 25995 mentions. In order to compare with previous methods, we evaluate our model on AIDA-B and other datasets. These datasets are well known and have been used for the evaluation of most entity linking systems. The statistics of the datasets are shown in Table 1.
AIDA-CoNLL BIBREF14 is annotated on Reuters news articles. It contains training (AIDA-Train), validation (AIDA-A) and test (AIDA-B) sets.
ACE2004 BIBREF15 is a subset of the ACE2004 Coreference documents.
MSNBC BIBREF16 contains the top two stories in each of the ten news categories (Politics, Business, Sports, etc.).
AQUAINT BIBREF17 is a news corpus from the Xinhua News Service, the New York Times, and the Associated Press.
WNED-CWEB BIBREF18 is randomly picked from the FACC1 annotated ClueWeb 2012 dataset.
WNED-WIKI BIBREF18 is crawled from Wikipedia pages with its original hyperlink annotation.
OURSELF-WIKI is crawled by ourselves from Wikipedia pages.
During the training of our RLEL model, we select the top K candidate entities for each mention to reduce memory use and running time. We define $R_t$ as the recall of the correct target entity within the top K candidate list. According to our statistics, $R_t$ is 0.853 when K is 1, 0.977 when K is 5, and 0.993 when K increases to 10. Empirically, we choose the top 5 candidate entities as the input of our RLEL model. For the entity description, a Wikipedia page contains a lot of redundant information; to reduce the impact of noisy data, we use the TextRank algorithm BIBREF19 to select 15 keywords as the description of the entity. Similarly, we choose the 15 words around a mention as its context. In the global LSTM network, when the number of mentions does not reach the set sequence length, we adopt a mention padding strategy: we copy the last mention in the sequence until the number of mentions reaches the set length.
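A minimal sketch of this preprocessing with hypothetical data structures; the actual pipeline and the TextRank keyword extraction are not reproduced here.

```python
TOP_K = 5     # candidate entities kept per mention
SEQ_LEN = 4   # mentions per sequence (see the sequence-length discussion below)

def truncate_candidates(mention):
    """Keep only the TOP_K candidates with the highest score."""
    mention["candidates"] = sorted(mention["candidates"],
                                   key=lambda c: c["score"], reverse=True)[:TOP_K]
    return mention

def pad_mentions(sequence):
    """Mention padding: repeat the last mention until the sequence has SEQ_LEN items.
    Assumes a non-empty sequence."""
    while len(sequence) < SEQ_LEN:
        sequence.append(sequence[-1])
    return sequence
```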
We set the dimensions of the word embeddings and entity embeddings to 300, where the word embeddings and entity embeddings are released by BIBREF20 and BIBREF0 respectively. For the local LSTM network, the number of LSTM cell units is set to 512, the batch size is 64, and the rank margin $\gamma $ is 0.1. Similarly, in the global LSTM network, the number of LSTM cell units is 700 and the batch size is 16. In the above two LSTM networks, the learning rate is set to 1e-3, the dropout probability is set to 0.8, and Adam is used as the optimizer. In addition, we set the number of MLP layers to 4 and extend the prior feature dimension to 50 in the policy network.
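The stated hyperparameters can be summarised in a configuration sketch; the key names are ours, the values are those reported above.

```python
CONFIG = {
    "embedding_dim": 300,       # word and entity embeddings
    "local_lstm_units": 512,
    "local_batch_size": 64,
    "rank_margin": 0.1,
    "global_lstm_units": 700,
    "global_batch_size": 16,
    "learning_rate": 1e-3,
    "dropout_prob": 0.8,        # reported as the dropout probability
    "optimizer": "adam",
    "policy_mlp_layers": 4,
    "prior_feature_dim": 50,
}
```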
Comparing with Previous Work
We compare RLEL with a series of EL systems that report state-of-the-art results on the test datasets. These include classification models BIBREF17, ranking models BIBREF21, BIBREF15 and probabilistic graphical models BIBREF18, BIBREF14, BIBREF22, BIBREF0, BIBREF1. In addition, Cheng et al. BIBREF23 formulate their global decision problem as an Integer Linear Program (ILP) that incorporates entity-relation inference, Globerson et al. BIBREF24 introduce a multi-focal attention model that allows each candidate to focus on a limited number of mentions, and Yamada et al. BIBREF25 propose a word and entity embedding model specifically designed for EL.
We use the standard Accuracy, Precision, Recall and F1 at mention level (Micro) as the evaluation metrics:
$$Accuracy = \frac{|M \cap M^*|}{|M \cup M^*|}$$ (Eq. 31)
$$Precision = \frac{|M \cap M^*|}{|M|}$$ (Eq. 32)
where $M^*$ is the gold-standard set of linked name mentions and $M$ is the set of linked name mentions output by an EL method.
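The set-based metrics of Eq. 31–32 can be computed as in the following sketch; the Recall and F1 definitions, which the text uses but does not spell out, are filled in with their standard micro-level forms and should be read as our assumption.

```python
def el_metrics(predicted, gold):
    """predicted, gold: sets of (mention, entity) links."""
    inter = predicted & gold
    union = predicted | gold
    accuracy = len(inter) / len(union) if union else 1.0             # Eq. 31
    precision = len(inter) / len(predicted) if predicted else 0.0    # Eq. 32
    recall = len(inter) / len(gold) if gold else 0.0                 # assumed standard definition
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)                            # assumed standard definition
    return accuracy, precision, recall, f1
```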
Following previous work, we use in-KB accuracy and micro F1 to evaluate our method. We first test the model on the AIDA-B dataset. From Table 2, we can observe that our model achieves the best result. The previous best results on this dataset were obtained by BIBREF0, BIBREF1, which both built CRF models and calculate the pairwise scores between all candidate entities. In contrast, our model only considers the consistency of the target entities and ignores the relationships between incorrect candidates. The experimental results show that our model can reduce the impact of noisy data and improve the accuracy of disambiguation. Apart from experimenting on AIDA-B, we also conduct experiments on several other datasets to verify the generalization performance of our model.
From Table 3, we can see that RLEL achieves relatively good performance on ACE2004, CWEB and WIKI. At the same time, previous models BIBREF0, BIBREF1, BIBREF23 achieve better performance on news datasets such as MSNBC and AQUAINT, but their results on encyclopedia datasets such as WIKI are relatively poor. To avoid overfitting to particular datasets and to improve the robustness of our model, we not only use AIDA-Train but also add Wikipedia data to the training set. In the end, our model achieves the best overall performance.
For most existing EL systems, low-frequency entities are difficult to disambiguate. To gain further insight, we analyze the accuracy on the AIDA-B dataset in situations where gold entities have low popularity. We divide the gold entities according to their pageviews in Wikipedia; the disambiguation results are shown in Table 4. Since some pageviews cannot be obtained, we only count a subset of the gold entities. The results indicate that our model still works well for low-frequency entities. However, for medium-frequency gold entities, our model does not work well enough. The main reason is that other candidate entities corresponding to these medium-frequency gold entities have higher pageviews and local similarities, which makes them difficult for the model to distinguish.
Discussion on different RLEL variants
To demonstrate the effects of RLEL, we evaluate our model under different conditions. First, we evaluate the effect of sequence length on global decision making. Second, we assess whether sorting the mentions has a positive effect on the results. Third, we analyze the results of not adding the global encoding during entity selection. Last, we compare our RL selection strategy with the greedy choice.
A document may contain multiple topics, so we do not add all mentions to a single sequence. In practice, we add adjacent mentions to a sequence and use reinforcement learning to select entities from beginning to end. To analyze the impact of the number of mentions on joint disambiguation, we experiment with sequences of different lengths. The results on AIDA-B are shown in Figure 4. We can see that when the sequence is too short or too long, the disambiguation results are poor. When the sequence length is less than 3, the delayed reward cannot work well in reinforcement learning, and when the sequence length reaches 5 or more, noisy data may be added. Finally, we choose 4 adjacent mentions to form a sequence.
In this section, we test whether ranking mentions is helpful for entity selection. First, we directly input the mentions into the global encoder in the order they appear in the text. We record the disambiguation results and compare them with the method that ranks mentions. As shown in Figure 5a, the model with ranked mentions achieves better performance on most datasets, indicating that it is effective to place mentions with higher local similarity at the front of the sequence. It is worth noting that the effect of ranking mentions is not obvious on the MSNBC dataset; the reason is that most mentions in MSNBC have similar local similarities, so the order of disambiguation has little effect on the final result.
Most previous methods mainly use the similarities between entities to correlate them, whereas our model associates them by encoding the selected entity information. To assess whether the global encoding contributes to disambiguation rather than adding noise, we compare the performance with and without the global information. When the global encoding is not added, the current state only contains the mention context representation, the candidate entity representation and the feature representation; notably, the selected target entity information is not taken into account. From the results in Figure 5b, we can see that the model with global encoding achieves an improvement of 4% accuracy over the method without global encoding.
To illustrate the necessity of adopting reinforcement learning for entity selection, we compare two entity selection strategies, following BIBREF5. Specifically, we perform entity selection with reinforcement learning and with a greedy choice. The greedy choice selects the entity with the largest local similarity from the candidate set, whereas the reinforcement learning selection is guided by the delayed reward, which provides a global perspective. In the comparative experiment, we keep the other conditions consistent and only replace the RL selection with the greedy choice. Based on the results in Figure 5c, we can conclude that our entity selector performs much better than the greedy strategy.
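A sketch of the greedy baseline used in this comparison, assuming a hypothetical candidate structure with a precomputed local-similarity score:

```python
def greedy_select(sequence):
    """For each mention, pick the candidate with the largest local similarity."""
    return [max(m["candidates"], key=lambda c: c["local_similarity"])
            for m in sequence]
```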
Case Study
Table 5 shows two entity selection examples by our RLEL model. For multiple mentions appearing in the document, we first sort them according to their local similarities, and select the target entities in order by the reinforcement learning model. From the results of sorting and disambiguation, we can see that our model is able to utilize the topical consistency between mentions and make full use of the selected target entity information.
Related Work
The related work can be roughly divided into two groups: entity linking and reinforcement learning.
Entity Linking
Entity linking falls broadly into two major approaches: local and global disambiguation. Early studies use local models to resolve mentions independently; they usually disambiguate mentions based on lexical matching between the mention's surrounding words and the entity profile in the reference KB. Various methods have been proposed to model a mention's local context, ranging from binary classification BIBREF17 to ranking models BIBREF26, BIBREF27. In these methods, a large number of hand-designed features are applied. For some marginal mentions from which it is difficult to extract features, researchers also exploit data retrieved by search engines BIBREF28, BIBREF29 or Wikipedia sentences BIBREF30. However, the feature engineering and search engine methods are both time-consuming and laborious. Recently, with the popularity of deep learning models, representation learning has been utilized to automatically find semantic features BIBREF31, BIBREF32. The entity representations learned by jointly modeling textual contexts and the knowledge base are effective in combining multiple sources of information. To make full use of the information contained in these representations, we also utilize pre-trained entity embeddings in our model.
In recent years, with the assumption that the target entities of all mentions in a document should be related, many novel global models for joint linking have been proposed. Assuming topical coherence among mentions, the authors of BIBREF33, BIBREF34 construct factor graph models, which represent mentions and candidate entities as variable nodes and exploit factor nodes to denote a series of features. Two recent studies BIBREF0, BIBREF1 use a fully-connected pairwise Conditional Random Field (CRF) model and exploit loopy belief propagation to estimate the max-marginal probability. Moreover, PageRank or Random Walk BIBREF35, BIBREF18, BIBREF7 are utilized to select the target entity for each mention. These probabilistic models usually need many predefined features, and calculating the max-marginal probability becomes difficult as the number of nodes increases. In order to automatically learn features from the data, Cao et al. BIBREF9 apply a Graph Convolutional Network to flexibly encode entity graphs. However, graph-based methods are computationally expensive because there are many candidate entity nodes in the graph.
To reduce the computation over candidate entity pairs, Globerson et al. BIBREF24 introduce a coherence model with an attention mechanism, where each mention only focuses on a fixed number of mentions. Unfortunately, choosing the number of attended mentions is not easy in practice. Two recent studies BIBREF8, BIBREF36 link all mentions by scanning the pairs of mentions at most once; they assume each mention only needs to be consistent with one other mention in the document. The limitation of their method is that the consistency information is too sparse, resulting in low confidence. Similar to us, Guo et al. BIBREF18 also sort mentions according to the difficulty of disambiguation, but they do not make full use of the information of previously referred entities for subsequent entity disambiguation. Nguyen et al. BIBREF2 use a sequence model, but they simply encode the results of the greedy choice and measure the similarities between the global encoding and the candidate entity representations. Their model neither considers the long-term impact of current decisions on subsequent choices, nor adds the selected target entity information to the current state to help disambiguation.
Reinforcement Learning
In the last few years, reinforcement learning has emerged as a powerful tool for solving complex sequential decision-making problems. It is well known for its great success in games, such as Go BIBREF37 and Atari games BIBREF38. Recently, reinforcement learning has also been successfully applied to many natural language processing tasks and achieved good performance BIBREF12, BIBREF39, BIBREF5. Feng et al. BIBREF5 use reinforcement learning for the relation classification task by filtering out noisy data from the sentence bag, achieving large improvements compared with traditional classifiers. Zhang et al. BIBREF40 apply reinforcement learning to sentence representation by automatically discovering task-relevant structures. For automatic taxonomy induction from a set of terms, Han et al. BIBREF41 design an end-to-end reinforcement learning model to determine which term to select and where to place it in the taxonomy, which effectively reduces the error propagation between the two phases. Inspired by the above works, we also add reinforcement learning to our framework.
Conclusions
In this paper we consider entity linking as a sequence decision problem and present a reinforcement learning based model. Our model learns a policy for selecting target entities in a sequential manner and makes decisions based on the current state and previous ones. By utilizing the information of previously referred entities, we can take advantage of global consistency to disambiguate mentions. Each selection made in the current state also has a long-term impact on subsequent decisions, which gives the learned policy a global view. In experiments, we evaluate our method on AIDA-B and other well-known datasets, and the results show that our system outperforms state-of-the-art solutions. In the future, we would like to use reinforcement learning to detect mentions and determine which mention should be disambiguated first in the document.
This research is supported by the National Key Research and Development Program of China (No. 2018YFB1004703), the Beijing Municipal Science and Technology Project under grant No. Z181100002718004, and the National Natural Science Foundation of China under grant No. 61602466. | output of global LSTM network at time $V_{m_i}^t$5 , which encodes the mention context and target entity information from $V_{m_i}^t$6 to $V_{m_i}^t$7 |
b6a4ab009e6f213f011320155a7ce96e713c11cf | b6a4ab009e6f213f011320155a7ce96e713c11cf_0 | Q: How did the author's work rank among other submissions on the challenge?
Text: Introduction
The BioASQ Challenge includes a question answering task (Phase B, part B) where the aim is to find the “ideal answer” — that is, an answer that would normally be given by a person BIBREF0. This is in contrast with most other question answering challenges where the aim is normally to give an exact answer, usually a fact-based answer or a list. Given that the answer is based on an input that consists of a biomedical question and several relevant PubMed abstracts, the task can be seen as an instance of query-based multi-document summarisation.
As in past participation BIBREF1, BIBREF2, we wanted to test the use of deep learning and reinforcement learning approaches for extractive summarisation. In contrast with past years where the training procedure was based on a regression set up, this year we experiment with various classification set ups. The main contributions of this paper are:
We compare classification and regression approaches and show that classification produces better results than regression but the quality of the results depends on the approach followed to annotate the data labels.
We conduct correlation analysis between various ROUGE evaluation metrics and the human evaluations conducted at BioASQ and show that Precision and F1 correlate better than Recall.
Section SECREF2 briefly introduces some related work for context. Section SECREF3 describes our classification and regression experiments. Section SECREF4 details our experiments using deep learning architectures. Section SECREF5 explains the reinforcement learning approaches. Section SECREF6 shows the results of our correlation analysis between ROUGE scores and human annotations. Section SECREF7 lists the specific runs submitted at BioASQ 7b. Finally, Section SECREF8 concludes the paper.
Related Work
The BioASQ challenge has organised annual challenges on biomedical semantic indexing and question answering since 2013 BIBREF0. Every year there has been a task about semantic indexing (task a) and another about question answering (task b), and occasionally there have been additional tasks. The tasks defined for 2019 are:
Large Scale Online Biomedical Semantic Indexing.
Biomedical Semantic QA involving Information Retrieval (IR), Question Answering (QA), and Summarisation.
Medical Semantic Indexing in Spanish.
BioASQ Task 7b consists of two phases. Phase A provides a biomedical question as an input, and participants are expected to find relevant concepts from designated terminologies and ontologies, relevant articles from PubMed, relevant snippets from the relevant articles, and relevant RDF triples from designated ontologies. Phase B provides a biomedical question and a list of relevant articles and snippets, and participant systems are expected to return the exact answers and the ideal answers. The training data is composed of the test data from all previous years, and amounts to 2,747 samples.

There has been considerable research on the use of machine learning approaches for tasks related to text summarisation, especially on single-document summarisation. Abstractive approaches normally use an encoder-decoder architecture, and variants of this architecture incorporate attention BIBREF3 and pointer-generator mechanisms BIBREF4. Recent approaches have leveraged pre-trained models BIBREF5. Recent extractive approaches to summarisation incorporate recurrent neural networks that model sequences of sentence extractions BIBREF6 and may incorporate an abstractive component and reinforcement learning during the training stage BIBREF7. But relatively few approaches have been proposed for query-based multi-document summarisation. Table TABREF8 summarises the approaches presented in the proceedings of the 2018 BioASQ challenge.
Classification vs. Regression Experiments
Our past participation in BioASQ BIBREF1, BIBREF2 and this paper focus on extractive approaches to summarisation. Our decision to focus on extractive approaches is based on the observation that a relatively large number of sentences from the input snippets have very high ROUGE scores, suggesting that human annotators had a general tendency to copy text from the input to generate the target summaries BIBREF1. Our past participating systems used regression approaches within the following framework:
Train the regressor to predict the ROUGE-SU4 F1 score of the input sentence.
Produce a summary by selecting the top $n$ input sentences.
A novelty in the current participation is the introduction of classification approaches using the following framework.
Train the classifier to predict the target label (“summary” or “not summary”) of the input sentence.
Produce a summary by selecting all sentences predicted as “summary”.
If the total number of sentences selected is less than $n$, select $n$ sentences with higher probability of label “summary”.
Introducing a classifier makes labelling the training data non-trivial, since the target summaries are human-generated and do not have a perfect mapping to the input sentences. In addition, some samples have multiple reference summaries. BIBREF11 showed that different data labelling approaches influence the quality of the final summary, and some labelling approaches may lead to better results than using regression. In this paper we experiment with the following labelling approaches:
Threshold $t$: Label as “summary” all sentences from the input text that have a ROUGE score above a threshold $t$.

Top $m$: Label as “summary” the $m$ input text sentences with the highest ROUGE score.
As in BIBREF11, the ROUGE score of an input sentence was the ROUGE-SU4 F1 score of the sentence against the set of reference summaries.
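A small sketch of the two labelling schemes, assuming the sentence scores are precomputed ROUGE-SU4 F1 values; the function names are ours.

```python
def label_by_threshold(scores, t):
    """Label a sentence as "summary" (1) if its ROUGE-SU4 F1 score exceeds t."""
    return [1 if s > t else 0 for s in scores]

def label_top_m(scores, m):
    """Label as "summary" the m sentences with the highest ROUGE-SU4 F1 score."""
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:m]
    return [1 if i in top else 0 for i in range(len(scores))]
```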
We conducted cross-validation experiments using various values of $t$ and $m$. Table TABREF26 shows the results for the best values of $t$ and $m$ obtained. The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC) respectively. To enable a fair comparison we used the same input features in all systems. These input features combine information from the question and the input sentence and are shown in Fig. FIGREF16. The features are based on BIBREF12, and are the same as in BIBREF1, plus the addition of the position of the input snippet. The best SVC and SVR parameters were determined by grid search.
Preliminary experiments showed a relatively high number of cases where the classifier did not classify any of the input sentences as “summary”. To solve this problem, and as mentioned above, the summariser used in Table TABREF26 introduces a backoff step that extracts the $n$ sentences with the highest predicted values when the summary has fewer than $n$ sentences. The value of $n$ is as reported in our prior work and shown in Table TABREF25.
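A sketch of the selection step with the backoff, assuming the classifier returns a probability of the “summary” label per sentence; the 0.5 decision threshold is our assumption.

```python
def select_summary(sentences, probs, n):
    """Return all sentences predicted as "summary"; back off to the top n if too few."""
    chosen = [s for s, p in zip(sentences, probs) if p >= 0.5]
    if len(chosen) < n:
        ranked = sorted(zip(sentences, probs), key=lambda x: x[1], reverse=True)
        chosen = [s for s, _ in ranked[:n]]
    return chosen
```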
The results confirm BIBREF11's finding that classification outperforms regression. However, the actual choice of optimal labelling scheme was different: whereas in BIBREF11 the optimal labelling was based on a labelling threshold of 0.1, our experiments show a better result when using the top 5 sentences as the target summary. The reason for this difference might be the fact that BIBREF11 used all sentences from the abstracts of the relevant PubMed articles, whereas we use only the snippets as the input to our summariser. Consequently, the number of input sentences is now much smaller. We therefore report the results of using the labelling schema of top 5 snippets in all subsequent classifier-based experiments of this paper.
Deep Learning Models
Based on the findings of Section SECREF3, we apply minimal changes to the deep learning regression models of BIBREF2 to convert them to classification models. In particular, we add a sigmoid activation to the final layer, and use cross-entropy as the loss function. The complete architecture is shown in Fig. FIGREF28.
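A hedged Keras-style sketch of that change; `encoded` is a placeholder for the sentence/question encoding layers of the original architecture, which are not reproduced here.

```python
import tensorflow as tf

def add_classification_head(encoded):
    # Regression head used in previous years: Dense(1) trained with mean squared error.
    # Classification head: sigmoid output trained with (binary) cross-entropy.
    return tf.keras.layers.Dense(1, activation="sigmoid")(encoded)

# model = tf.keras.Model(inputs, add_classification_head(encoded))
# model.compile(optimizer="adam", loss="binary_crossentropy")
# Note: binary cross-entropy also accepts the continuous ROUGE-SU4 F1 targets in [0, 1]
# used by the "NNC SU4 F1" variant described below.
```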
The bottom section of Table TABREF26 shows the results of several variants of the neural architecture. The table includes a neural regressor (NNR) and a neural classifier (NNC). The neural classifier is trained in two set ups: “NNC top 5” uses classification labels as described in Section SECREF3, and “NNC SU4 F1” uses the regression labels, that is, the ROUGE-SU4 F1 scores of each sentence. Of interest is the fact that “NNC SU4 F1” outperforms the neural regressor. We have not explored this further and we presume that the relatively good results are due to the fact that ROUGE values range between 0 and 1, which matches the full range of probability values that can be returned by the sigmoid activation of the classifier final layer.
Table TABREF26 also shows the standard deviation across the cross-validation folds. Whereas this standard deviation is fairly large compared with the differences in results, in general the results are compatible with the top part of the table and prior work suggesting that classification-based approaches improve over regression-based approaches.
Reinforcement Learning
We also experiment with the use of reinforcement learning techniques. Again these experiments are based on BIBREF2, who uses REINFORCE to train a global policy. The policy predictor uses a simple feedforward network with a hidden layer.
The results reported by BIBREF2 used ROUGE Recall and indicated no improvement with respect to deep learning architectures. Human evaluation results are preferable over ROUGE but these were made available after the publication of the paper. When comparing the ROUGE and human evaluation results (Table TABREF29), we observe an inversion of the results. In particular, the reinforcement learning approaches (RL) of BIBREF2 receive good human evaluation results, and as a matter of fact they are the best of our runs in two of the batches. In contrast, the regression systems (NNR) fare relatively poorly. Section SECREF6 expands on the comparison between the ROUGE and human evaluation scores.
Encouraged by the results of Table TABREF29, we decided to continue with our experiments with reinforcement learning. We use the same features as in BIBREF2, namely the length (in number of sentences) of the summary generated so far, plus the $tf.idf$ vectors of the following:
Candidate sentence;
Entire input to summarise;
Summary generated so far;
Candidate sentences that are yet to be processed; and
Question.
The reward used by REINFORCE is the ROUGE value of the summary generated by the system. Since BIBREF2 observed a difference between the ROUGE values of the Python implementation of ROUGE and the original Perl version (partly because the Python implementation does not include ROUGE-SU4), we compare the performance of our system when trained with each of them. Table TABREF35 summarises some of our experiments. We ran the version trained on Python ROUGE once, and the version trained on Perl twice. The two Perl runs have different results, and one of them clearly outperforms the Python run. However, given the differences in results between the two Perl runs, we advise re-running the experiments multiple times and obtaining the mean and standard deviation of the runs before concluding whether there is any statistical difference between the results. Still, it seems that there may be an improvement in the final evaluation results when training on the Perl ROUGE values, presumably because the final evaluation results are measured using the Perl implementation of ROUGE.
We have also tested the use of word embeddings instead of $tf.idf$ as input features to the policy model, while keeping the same neural architecture for the policy (one hidden layer using the same number of hidden nodes). In particular, we use the mean of word embeddings using 100 and 200 dimensions. These word embeddings were pre-trained using word2vec on PubMed documents provided by the organisers of BioASQ, as we did for the architectures described in previous sections. The results, not shown in the paper, indicated no major improvement, and re-runs of the experiments showed different results on different runs. Consequently, our submission to BioASQ included the original system using $tf.idf$ as input features in all batches but batch 2, as described in Section SECREF7.
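A sketch of the mean-of-word-embeddings feature, assuming a pre-trained word2vec model (e.g. loaded with gensim) and a simple whitespace tokeniser; both assumptions are ours.

```python
import numpy as np

def mean_embedding(text, w2v, dim=200):
    """Average the word2vec vectors of the tokens; zeros if nothing is in vocabulary."""
    vecs = [w2v[w] for w in text.lower().split() if w in w2v]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)
```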
Evaluation Correlation Analysis
As mentioned in Section SECREF5, there appears to be a large discrepancy between ROUGE Recall and the human evaluations. This section describes a correlation analysis between human and ROUGE evaluations using the runs of all participants in all previous BioASQ challenges that included human evaluations (Phase B, ideal answers). The human evaluation results were scraped from the BioASQ Results page, and the ROUGE results were kindly provided by the organisers. We compute the correlation of each of the ROUGE metrics (recall, precision, F1 for ROUGE-2 and ROUGE-SU4) against the average of the human scores. The correlation metrics are Pearson, Kendall, and a revised Kendall correlation explained below.
The Pearson correlation between two variables is computed as the covariance of the two variables divided by the product of their standard deviations. This correlation is a good indication of a linear relation between the two variables, but may not be very effective when there is non-linear correlation.
The Spearman rank correlation and the Kendall rank correlation are two of the most popular metrics for detecting non-linear correlations. The Spearman rank correlation between two variables can be computed as the Pearson correlation between the rank values of the two variables, whereas the Kendall rank correlation measures the ordinal association between the two variables using Equation DISPLAY_FORM36.
It is useful to account for the fact that the results are from 28 independent sets (3 batches in BioASQ 1 and 5 batches each year between BioASQ 2 and BioASQ 6). We therefore also compute a revised Kendall rank correlation measure that only considers pairs of variable values within the same set. The revised metric is computed using Equation DISPLAY_FORM37, where $S$ is the list of different sets.
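The exact form of the revised metric is given by Equation DISPLAY_FORM37, which is not reproduced here; the sketch below shows one plausible reading — aggregating concordant and discordant pairs only within each set — and should be treated as our interpretation rather than the authors' code.

```python
from itertools import combinations

def revised_kendall(sets):
    """sets: list of lists of (x, y) value pairs, one list per evaluation batch."""
    concordant = discordant = 0
    for pairs in sets:
        for (x1, y1), (x2, y2) in combinations(pairs, 2):
            s = (x1 - x2) * (y1 - y2)
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    total = concordant + discordant
    return (concordant - discordant) / total if total else 0.0
```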
Table TABREF38 shows the results of all correlation metrics. Overall, ROUGE-2 and ROUGE-SU4 give similar correlation values but ROUGE-SU4 is marginally better. Among precision, recall and F1, both precision and F1 are similar, but precision gives a better correlation. Recall shows poor correlation, and virtually no correlation when using the revised Kendall measure. For reporting the evaluation of results, it will therefore be more useful to use precision or F1. However, given the small difference between precision and F1, and given that precision may favour short summaries when used as a function to optimise in a machine learning setting (e.g. using reinforcement learning), it may be best to use F1 as the metric to optimise.
Fig. FIGREF40 shows the scatterplots of ROUGE-SU4 recall, precision and F1 with respect to the average human evaluation. We observe that the relation between ROUGE and the human evaluations is not linear, and that Precision and F1 have a clear correlation.
Submitted Runs
Table TABREF41 shows the results and details of the runs submitted to BioASQ. The table uses ROUGE-SU4 Recall since this is the metric available at the time of writing this paper. However, note that, as explained in Section SECREF6, these results might differ from the final human evaluation results. Therefore we do not comment on the results, other than observing that the “first $n$” baseline produces the same results as the neural regressor. As mentioned in Section SECREF3, the labels used for the classification experiments are the 5 sentences with highest ROUGE-SU4 F1 score.
Conclusions
Macquarie University's participation in BioASQ 7 focused on the task of generating the ideal answers. The runs use query-based extractive techniques and we experiment with classification, regression, and reinforcement learning approaches. At the time of writing there were no human evaluation results, and based on ROUGE-F1 scores under cross-validation on the training data we observed that classification approaches outperform regression approaches. We experimented with several approaches to label the individual sentences for the classifier and observed that the optimal labelling policy for this task differed from prior work.
We also observed poor correlation between ROUGE-Recall and human evaluation metrics and suggest using alternative automatic evaluation metrics with better correlation, such as ROUGE-Precision or ROUGE-F1. Given the nature of precision-based metrics, which could bias the system towards returning short summaries, ROUGE-F1 is probably more appropriate for use at development time, for example for the reward function used by a reinforcement learning system.
Reinforcement learning gives promising results, especially in human evaluations made on the runs submitted to BioASQ 6b. This year we introduced very small changes to the runs using reinforcement learning, and will aim to explore more complex reinforcement learning strategies and more complex neural models in the policy and value estimators. | Unanswerable |
cfffc94518d64cb3c8789395707e4336676e0345 | cfffc94518d64cb3c8789395707e4336676e0345_0 | Q: What approaches without reinforcement learning have been tried?
| classification, regression, neural methods |
cfffc94518d64cb3c8789395707e4336676e0345 | cfffc94518d64cb3c8789395707e4336676e0345_1 | Q: What approaches without reinforcement learning have been tried?
Text: Introduction
The BioASQ Challenge includes a question answering task (Phase B, part B) where the aim is to find the “ideal answer” — that is, an answer that would normally be given by a person BIBREF0. This is in contrast with most other question answering challenges where the aim is normally to give an exact answer, usually a fact-based answer or a list. Given that the answer is based on an input that consists of a biomedical question and several relevant PubMed abstracts, the task can be seen as an instance of query-based multi-document summarisation.
As in past participation BIBREF1, BIBREF2, we wanted to test the use of deep learning and reinforcement learning approaches for extractive summarisation. In contrast with past years where the training procedure was based on a regression set up, this year we experiment with various classification set ups. The main contributions of this paper are:
We compare classification and regression approaches and show that classification produces better results than regression but the quality of the results depends on the approach followed to annotate the data labels.
We conduct correlation analysis between various ROUGE evaluation metrics and the human evaluations conducted at BioASQ and show that Precision and F1 correlate better than Recall.
Section SECREF2 briefly introduces some related work for context. Section SECREF3 describes our classification and regression experiments. Section SECREF4 details our experiments using deep learning architectures. Section SECREF5 explains the reinforcement learning approaches. Section SECREF6 shows the results of our correlation analysis between ROUGE scores and human annotations. Section SECREF7 lists the specific runs submitted at BioASQ 7b. Finally, Section SECREF8 concludes the paper.
Related Work
The BioASQ challenge has organised annual challenges on biomedical semantic indexing and question answering since 2013 BIBREF0. Every year there has been a task about semantic indexing (task a) and another about question answering (task b), and occasionally there have been additional tasks. The tasks defined for 2019 are:
Large Scale Online Biomedical Semantic Indexing.
Biomedical Semantic QA involving Information Retrieval (IR), Question Answering (QA), and Summarisation.
Medical Semantic Indexing in Spanish.
BioASQ Task 7b consists of two phases. Phase A provides a biomedical question as an input, and participants are expected to find relevant concepts from designated terminologies and ontologies, relevant articles from PubMed, relevant snippets from the relevant articles, and relevant RDF triples from designated ontologies. Phase B provides a biomedical question and a list of relevant articles and snippets, and participant systems are expected to return the exact answers and the ideal answers. The training data is composed of the test data from all previous years, and amounts to 2,747 samples. There has been considerable research on the use of machine learning approaches for tasks related to text summarisation, especially on single-document summarisation. Abstractive approaches normally use an encoder-decoder architecture and variants of this architecture incorporate attention BIBREF3 and pointer-generator BIBREF4. Recent approaches leveraged the use of pre-trained models BIBREF5. Recent extractive approaches to summarisation incorporate recurrent neural networks that model sequences of sentence extractions BIBREF6 and may incorporate an abstractive component and reinforcement learning during the training stage BIBREF7. But relatively few approaches have been proposed for query-based multi-document summarisation. Table TABREF8 summarises the approaches presented in the proceedings of the 2018 BioASQ challenge.
Classification vs. Regression Experiments
Our past participation in BioASQ BIBREF1, BIBREF2 and this paper focus on extractive approaches to summarisation. Our decision to focus on extractive approaches is based on the observation that a relatively large number of sentences from the input snippets has very high ROUGE scores, thus suggesting that human annotators had a general tendency to copy text from the input to generate the target summaries BIBREF1. Our past participating systems used regression approaches using the following framework:
Train the regressor to predict the ROUGE-SU4 F1 score of the input sentence.
Produce a summary by selecting the top $n$ input sentences.
A novelty in the current participation is the introduction of classification approaches using the following framework.
Train the classifier to predict the target label (“summary” or “not summary”) of the input sentence.
Produce a summary by selecting all sentences predicted as “summary”.
If the total number of sentences selected is less than $n$, select $n$ sentences with higher probability of label “summary”.
Introducing a classifier makes labelling the training data not trivial, since the target summaries are human-generated and they do not have a perfect mapping to the input sentences. In addition, some samples have multiple reference summaries. BIBREF11 showed that different data labelling approaches influence the quality of the final summary, and some labelling approaches may lead to better results than using regression. In this paper we experiment with the following labelling approaches:
: Label as “summary” all sentences from the input text that have a ROUGE score above a threshold $t$.
Label as “summary” the $m$ input text sentences with highest ROUGE score.
As in BIBREF11, the ROUGE score of an input sentence was the ROUGE-SU4 F1 score of the sentence against the set of reference summaries.
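The two labelling schemes reduce to a few lines over the per-sentence ROUGE-SU4 F1 scores. This is a sketch under our reading of the schemes; the function names are ours.

```python
import numpy as np

def label_by_threshold(rouge_su4_f1, t):
    """Scheme 1: label as "summary" (1) every sentence scoring above t."""
    return (np.asarray(rouge_su4_f1) > t).astype(int)

def label_top_m(rouge_su4_f1, m):
    """Scheme 2: label as "summary" (1) the m highest-scoring sentences."""
    scores = np.asarray(rouge_su4_f1)
    labels = np.zeros(len(scores), dtype=int)
    labels[np.argsort(-scores)[:m]] = 1
    return labels
```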
We conducted cross-validation experiments using various values of $t$ and $m$. Table TABREF26 shows the results for the best values of $t$ and $m$ obtained. The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC) respectively. To enable a fair comparison we used the same input features in all systems. These input features combine information from the question and the input sentence and are shown in Fig. FIGREF16. The features are based on BIBREF12, and are the same as in BIBREF1, plus the addition of the position of the input snippet. The best SVC and SVR parameters were determined by grid search.
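For concreteness, the SVC/SVR grid search can be set up with scikit-learn as below; the parameter grid and the number of folds are illustrative guesses rather than the values actually used.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC, SVR

param_grid = {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.001]}

# Classifier over "summary"/"not summary" labels; probability=True exposes the
# class probabilities used by the backoff step.
svc = GridSearchCV(SVC(probability=True), param_grid, cv=10)
# Regressor over per-sentence ROUGE-SU4 F1 scores.
svr = GridSearchCV(SVR(), param_grid, cv=10)
# Both are fitted on the same question/sentence features (Fig. FIGREF16):
# svc.fit(X, y_labels); svr.fit(X, y_rouge)
```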
Preliminary experiments showed a relatively high number of cases where the classifier did not classify any of the input sentences as “summary”. To solve this problem, and as mentioned above, the summariser used in Table TABREF26 introduces a backoff step that extracts the $n$ sentences with the highest predicted values when the summary has fewer than $n$ sentences. The value of $n$ is as reported in our prior work and shown in Table TABREF25.
The results confirm BIBREF11's finding that classification outperforms regression. However, the actual choice of optimal labelling scheme was different: whereas in BIBREF11 the optimal labelling was based on a labelling threshold of 0.1, our experiments show a better result when using the top 5 sentences as the target summary. The reason for this difference might be the fact that BIBREF11 used all sentences from the abstracts of the relevant PubMed articles, whereas we use only the snippets as the input to our summariser. Consequently, the number of input sentences is now much smaller. We therefore use the top-5 labelling scheme in all subsequent classifier-based experiments in this paper.
Deep Learning Models
Based on the findings of Section SECREF3, we apply minimal changes to the deep learning regression models of BIBREF2 to convert them to classification models. In particular, we add a sigmoid activation to the final layer, and use cross-entropy as the loss function. The complete architecture is shown in Fig. FIGREF28.
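In PyTorch terms, the change is confined to the output layer and the loss. The sketch below stands in for the architecture of Fig. FIGREF28, which is not reproduced here; `encoder` is assumed to be any module that maps a sentence (and question) to a fixed-size vector.

```python
import torch
import torch.nn as nn

class SentenceClassifier(nn.Module):
    """Regression head turned into a classifier: sigmoid output + cross-entropy."""
    def __init__(self, encoder, hidden_size):
        super().__init__()
        self.encoder = encoder            # assumed: maps a batch to (batch, hidden_size)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, batch):
        return torch.sigmoid(self.out(self.encoder(batch))).squeeze(-1)

loss_fn = nn.BCELoss()   # binary cross-entropy against the 0/1 "summary" labels
# "NNC SU4 F1" reuses the same head but trains against ROUGE-SU4 F1 targets,
# which also lie in [0, 1], so the same loss applies.
```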
The bottom section of Table TABREF26 shows the results of several variants of the neural architecture. The table includes a neural regressor (NNR) and a neural classifier (NNC). The neural classifier is trained in two set ups: “NNC top 5” uses classification labels as described in Section SECREF3, and “NNC SU4 F1” uses the regression labels, that is, the ROUGE-SU4 F1 scores of each sentence. Of interest is the fact that “NNC SU4 F1” outperforms the neural regressor. We have not explored this further and we presume that the relatively good results are due to the fact that ROUGE values range between 0 and 1, which matches the full range of probability values that can be returned by the sigmoid activation of the classifier final layer.
Table TABREF26 also shows the standard deviation across the cross-validation folds. Whereas this standard deviation is fairly large compared with the differences in results, in general the results are compatible with the top part of the table and prior work suggesting that classification-based approaches improve over regression-based approaches.
Reinforcement Learning
We also experiment with the use of reinforcement learning techniques. Again these experiments are based on BIBREF2, who uses REINFORCE to train a global policy. The policy predictor uses a simple feedforward network with a hidden layer.
The results reported by BIBREF2 used ROUGE Recall and indicated no improvement with respect to deep learning architectures. Human evaluation results are preferable to ROUGE, but these were only made available after the publication of that paper. When comparing the ROUGE and human evaluation results (Table TABREF29), we observe an inversion of the results. In particular, the reinforcement learning approaches (RL) of BIBREF2 receive good human evaluation results, and as a matter of fact they are the best of our runs in two of the batches. In contrast, the regression systems (NNR) fare relatively poorly. Section SECREF6 expands on the comparison between the ROUGE and human evaluation scores.
Encouraged by the results of Table TABREF29, we decided to continue with our experiments with reinforcement learning. We use the same features as in BIBREF2, namely the length (in number of sentences) of the summary generated so far, plus the $tf.idf$ vectors of the following:
Candidate sentence;
Entire input to summarise;
Summary generated so far;
Candidate sentences that are yet to be processed; and
Question.
The reward used by REINFORCE is the ROUGE value of the summary generated by the system. Since BIBREF2 observed a difference between the ROUGE values of the Python implementation of ROUGE and the original Perl version (partly because the Python implementation does not include ROUGE-SU4), we compare the performance of our system when trained with each of them. Table TABREF35 summarises some of our experiments. We ran the version trained on Python ROUGE once, and the version trained on Perl twice. The two Perl runs have different results, and one of them clearly outperforms the Python run. However, given the differences in results between the two Perl runs, we advise re-running the experiments multiple times and obtaining the mean and standard deviation of the runs before concluding whether there is any statistical difference between the results. But it seems that there may be an improvement in the final evaluation results when training on the Perl ROUGE values, presumably because the final evaluation results are measured using the Perl implementation of ROUGE.
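The sketch below shows the shape of the REINFORCE update with a delayed ROUGE reward; it is our condensed reading of the approach, not the code of BIBREF2. `policy` is assumed to return an inclusion probability for the current state, `build_state` assembles the $tf.idf$ features listed above, and `rouge` wraps whichever ROUGE implementation (Perl or Python) supplies the reward.

```python
import torch

def reinforce_episode(policy, build_state, sentences, question, references,
                      rouge, optimizer):
    """Sample include/skip decisions for each candidate sentence, then
    reinforce them with the ROUGE score of the resulting summary."""
    log_probs, summary = [], []
    for i, sent in enumerate(sentences):
        # State: candidate, full input, summary so far, remaining candidates, question.
        state = build_state(sent, sentences, summary, sentences[i + 1:], question)
        p_include = policy(state)                      # probability in (0, 1)
        dist = torch.distributions.Bernoulli(p_include)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        if action.item() == 1:
            summary.append(sent)
    reward = rouge(" ".join(summary), references)      # delayed, episode-level reward
    loss = -reward * torch.stack(log_probs).sum()      # REINFORCE objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```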
We have also tested the use of word embeddings instead of $tf.idf$ as input features to the policy model, while keeping the same neural architecture for the policy (one hidden layer using the same number of hidden nodes). In particular, we use the mean of word embeddings using 100 and 200 dimensions. These word embeddings were pre-trained using word2vec on PubMed documents provided by the organisers of BioASQ, as we did for the architectures described in previous sections. The results, not shown in the paper, indicated no major improvement, and re-runs of the experiments showed different results on different runs. Consequently, our submission to BioASQ included the original system using $tf.idf$ as input features in all batches but batch 2, as described in Section SECREF7.
Evaluation Correlation Analysis
As mentioned in Section SECREF5, there appears to be a large discrepancy between ROUGE Recall and the human evaluations. This section describes a correlation analysis between human and ROUGE evaluations using the runs of all participants in all previous BioASQ challenges that included human evaluations (Phase B, ideal answers). The human evaluation results were scraped from the BioASQ Results page, and the ROUGE results were kindly provided by the organisers. We compute the correlation of each of the ROUGE metrics (recall, precision, F1 for ROUGE-2 and ROUGE-SU4) against the average of the human scores. The correlation metrics are Pearson, Kendall, and a revised Kendall correlation explained below.
The Pearson correlation between two variables is computed as the covariance of the two variables divided by the product of their standard deviations. This correlation is a good indication of a linear relation between the two variables, but may not be very effective when there is non-linear correlation.
The Spearman rank correlation and the Kendall rank correlation are two of the most popular metrics for detecting non-linear correlations. The Spearman rank correlation between two variables can be computed as the Pearson correlation between the rank values of the two variables, whereas the Kendall rank correlation measures the ordinal association between the two variables using Equation DISPLAY_FORM36.
It is useful to account for the fact that the results are from 28 independent sets (3 batches in BioASQ 1 and 5 batches each year between BioASQ 2 and BioASQ 6). We therefore also compute a revised Kendall rank correlation measure that only considers pairs of variable values within the same set. The revised metric is computed using Equation DISPLAY_FORM37, where $S$ is the list of different sets.
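Equations DISPLAY_FORM36 and DISPLAY_FORM37 are not reproduced here, so the snippet below is one plausible reading of the revised measure: Kendall-style counting of concordant and discordant pairs, restricted to pairs of runs that belong to the same batch (ties are simply ignored in this sketch).

```python
# The unrestricted Pearson and Kendall metrics come from
# scipy.stats.pearsonr and scipy.stats.kendalltau.
def revised_kendall(human, rouge, set_ids):
    """Rank correlation using only pairs of runs from the same set/batch."""
    concordant = discordant = 0
    for s in set(set_ids):
        idx = [i for i, sid in enumerate(set_ids) if sid == s]
        for a in range(len(idx)):
            for b in range(a + 1, len(idx)):
                i, j = idx[a], idx[b]
                prod = (human[i] - human[j]) * (rouge[i] - rouge[j])
                if prod > 0:
                    concordant += 1
                elif prod < 0:
                    discordant += 1
    total = concordant + discordant
    return (concordant - discordant) / total if total else 0.0
```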
Table TABREF38 shows the results of all correlation metrics. Overall, ROUGE-2 and ROUGE-SU4 give similar correlation values, but ROUGE-SU4 is marginally better. Precision and F1 behave similarly, with precision giving a slightly better correlation, while recall shows poor correlation, and virtually no correlation when using the revised Kendall measure. For reporting evaluation results, it will therefore be more useful to use precision or F1. However, given the small difference between precision and F1, and given that precision may favour short summaries when used as a function to optimise in a machine learning setting (e.g. using reinforcement learning), it may be best to use F1 as the metric to optimise.
Fig. FIGREF40 shows the scatterplots of ROUGE-SU4 recall, precision and F1 with respect to the average human evaluation. We observe that the relation between ROUGE and the human evaluations is not linear, and that Precision and F1 have a clear correlation.
Submitted Runs
Table TABREF41 shows the results and details of the runs submitted to BioASQ. The table uses ROUGE-SU4 Recall since this is the metric available at the time of writing this paper. However, note that, as explained in Section SECREF6, these results might differ from the final human evaluation results. Therefore we do not comment on the results, other than observing that the “first $n$” baseline produces the same results as the neural regressor. As mentioned in Section SECREF3, the labels used for the classification experiments are the 5 sentences with highest ROUGE-SU4 F1 score.
Conclusions
Macquarie University's participation in BioASQ 7 focused on the task of generating the ideal answers. The runs use query-based extractive techniques and we experiment with classification, regression, and reinforcement learning approaches. At the time of writing there were no human evaluation results, and based on ROUGE-F1 scores under cross-validation on the training data we observed that classification approaches outperform regression approaches. We experimented with several approaches to label the individual sentences for the classifier and observed that the optimal labelling policy for this task differed from prior work.
We also observed poor correlation between ROUGE-Recall and human evaluation metrics and suggest using alternative automatic evaluation metrics with better correlation, such as ROUGE-Precision or ROUGE-F1. Given the nature of precision-based metrics, which could bias the system towards returning short summaries, ROUGE-F1 is probably more appropriate when used at development time, for example for the reward function used by a reinforcement learning system.
Reinforcement learning gives promising results, especially in human evaluations made on the runs submitted to BioASQ 6b. This year we introduced very small changes to the runs using reinforcement learning, and will aim to explore more complex reinforcement learning strategies and more complex neural models in the policy and value estimators. | Support Vector Regression (SVR) and Support Vector Classification (SVC), deep learning regression models of BIBREF2 to convert them to classification models |
f60629c01f99de3f68365833ee115b95a3388699 | f60629c01f99de3f68365833ee115b95a3388699_0 | Q: What classification approaches were experimented for this task?
Text: Introduction
NNC SU4 F1, NNC top 5, Support Vector Classification (SVC)
a7cb4f8e29fd2f3d1787df64cd981a6318b65896 | a7cb4f8e29fd2f3d1787df64cd981a6318b65896_0 | Q: Did classification models perform better than previous regression one?
Text: Introduction
Yes
642c4704a71fd01b922a0ef003f234dcc7b223cd | 642c4704a71fd01b922a0ef003f234dcc7b223cd_0 | Q: What are the main sources of recall errors in the mapping?
Text: Introduction
The two largest standardized, cross-lingual datasets for morphological annotation are provided by the Universal Dependencies BIBREF1 and Universal Morphology BIBREF2, BIBREF3 projects. Each project's data are annotated according to its own cross-lingual schema, prescribing how features like gender or case should be marked. The schemata capture largely similar information, so one may want to leverage both UD's token-level treebanks and UniMorph's type-level lookup tables and unify the two resources. Unfortunately, neither resource perfectly realizes its schema. On a dataset-by-dataset basis, they incorporate annotator errors, omissions, and human decisions when the schemata are underspecified; one such example is in fig:disagreement.
A dataset-by-dataset problem demands a dataset-by-dataset solution; our task is not to translate a schema, but to translate a resource. Starting from the idealized schema, we create a rule-based tool for converting UD-schema annotations to UniMorph annotations, incorporating language-specific post-edits that both correct infelicities and also increase harmony between the datasets themselves (rather than the schemata). We apply this conversion to the 31 languages with both UD and UniMorph data, and we report our method's recall, showing an improvement over the strategy which just maps corresponding schematic features to each other. Further, we show similar downstream performance for each annotation scheme in the task of morphological tagging.
This tool enables a synergistic use of UniMorph and Universal Dependencies, as well as teasing out the annotation discrepancies within and across projects. When one dataset disobeys its schema or disagrees with a related language, the flaws may not be noticed except by such a methodological dive into the resources. When the maintainers of the resources ameliorate these flaws, the resources move closer to the goal of a universal, cross-lingual inventory of features for morphological annotation.
The contributions of this work are:
Background: Morphological Inflection
Morphological inflection is the act of altering the base form of a word (the lemma, represented in fixed-width type) to encode morphosyntactic features. As an example from English, prove takes on the form proved to indicate that the action occurred in the past. (We will represent all surface forms in quotation marks.) The process occurs in the majority of the world's widely-spoken languages, typically through meaningful affixes. The breadth of forms created by inflection creates a challenge of data sparsity for natural language processing: The likelihood of observing a particular word form diminishes.
A classic result in psycholinguistics BIBREF4 shows that inflectional morphology is a fully productive process. Indeed, it cannot be that humans simply have the equivalent of a lookup table, where they store the inflected forms for retrieval as the syntactic context requires. Instead, there needs to be a mental process that can generate properly inflected words on demand. BIBREF4 showed this insightfully through the wug-test, an experiment where she forced participants to correctly inflect out-of-vocabulary lemmata, such as the novel noun wug.
Certain features of a word do not vary depending on its context: In German or Spanish where nouns are gendered, the word for onion will always be grammatically feminine. Thus, to prepare for later discussion, we divide the morphological features of a word into two categories: the modifiable inflectional features and the fixed lexical features.
A part of speech (POS) is a coarse syntactic category (like verb) that begets a word's particular menu of lexical and inflectional features. In English, verbs express no gender, and adjectives do not reflect person or number. The part of speech dictates a set of inflectional slots to be filled by the surface forms. Completing these slots for a given lemma and part of speech gives a paradigm: a mapping from slots to surface forms. Regular English verbs have five slots in their paradigm BIBREF5 , which we illustrate for the verb prove, using simple labels for the forms in tab:ptb.
A morphosyntactic schema prescribes how language can be annotated—giving stricter categories than our simple labels for prove—and can vary in the level of detail provided. Part of speech tags are an example of a very coarse schema, ignoring details of person, gender, and number. A slightly finer-grained schema for English is the Penn Treebank tagset BIBREF6 , which includes signals for English morphology. For instance, its VBZ tag pertains to the specially inflected 3rd-person singular, present-tense verb form (e.g. proves in tab:ptb).
If the tag in a schema is detailed enough that it exactly specifies a slot in a paradigm, it is called a morphosyntactic description (MSD). These descriptions require varying amounts of detail: While the English verbal paradigm is small enough to fit on a page, the verbal paradigm of the Northeast Caucasian language Archi can have over 1,500,000 slots BIBREF7.
Two Schemata, Two Philosophies
Unlike the Penn Treebank tags, the UD and UniMorph schemata are cross-lingual and include a fuller lexicon of attribute-value pairs, such as Person: 1. Each was built according to a different set of principles. UD's schema is constructed bottom-up, adapting to include new features when they're identified in languages. UniMorph, conversely, is top-down: A cross-lingual survey of the literature of morphological phenomena guided its design. UniMorph aims to be linguistically complete, containing all known morphosyntactic attributes. Both schemata share one long-term goal: a total inventory for annotating the possible morphosyntactic features of a word.
Universal Dependencies
The Universal Dependencies morphological schema comprises part of speech and 23 additional attributes (also called features in UD) annotating meaning or syntax, as well as language-specific attributes. In order to ensure consistent annotation, attributes are included into the general UD schema if they occur in several corpora. Language-specific attributes are used when only one corpus annotates for a specific feature.
The UD schema seeks to balance language-specific and cross-lingual concerns. It annotates for both inflectional features such as case and lexical features such as gender. Additionally, the UD schema annotates for features which can be interpreted as derivational in some languages. For example, the Czech UD guidance uses a Coll value for the Number feature to denote mass nouns (e.g., "lidstvo" "humankind" from the root "lid" "people").
UD represents a confederation of datasets BIBREF8 annotated with dependency relationships (which are not the focus of this work) and morphosyntactic descriptions. Each dataset is an annotated treebank, making it a resource of token-level annotations. The schema is guided by these treebanks, with feature names chosen for relevance to native speakers. (In sec:unimorph, we will contrast this with UniMorph's treatment of morphosyntactic categories.) The UD datasets have been used in the CoNLL shared tasks BIBREF9 .
UniMorph
In the Universal Morphological Feature Schema BIBREF10 , there are at least 212 values, spread across 23 attributes. It identifies some attributes that UD excludes like information structure and deixis, as well as providing more values for certain attributes, like 23 different noun classes endemic to Bantu languages. As it is a schema for marking morphology, its part of speech attribute does not have POS values for punctuation, symbols, or miscellany (Punct, Sym, and X in Universal Dependencies).
Like the UD schema, the decomposition of a word into its lemma and MSD is directly comparable across languages. Its features are informed by a distinction between universal categories, which are widespread and psychologically real to speakers; and comparative concepts, only used by linguistic typologists to compare languages BIBREF11 . Additionally, it strives for identity of meaning across languages, not simply similarity of terminology. As a prime example, it does not regularly label a dative case for nouns, for reasons explained in depth by BIBREF11 .
The UniMorph resources for a language contain complete paradigms extracted from Wiktionary BIBREF12 , BIBREF13 . Word types are annotated to form a database, mapping a lemma–tag pair to a surface form. The schema is explained in detail in BIBREF10 . It has been used in the SIGMORPHON shared task BIBREF14 and the CoNLL–SIGMORPHON shared tasks BIBREF15 , BIBREF16 . Several components of the UniMorph schema have been adopted by UD.
Similarities in the annotation
While the two schemata annotate different features, their annotations often look largely similar. Consider the attested annotation of the Spanish word mandaba (I/he/she/it) commanded. tab:annotations shows that these annotations share many attributes.
Some conversions are straightforward: VERB to V, Mood=Ind to IND, Number=Sing to SG, and Person=3 to 3. One might also suggest mapping Tense=Imp to IPFV, though this crosses semantic categories: IPFV represents the imperfective aspect, whereas Tense=Imp comes from imperfect, the English name often given to Spanish's pasado continuo form. The imperfect is a verb form which combines both past tense and imperfective aspect. UniMorph chooses to split this into the atoms PST and IPFV, while UD unifies them according to the familiar name of the tense.
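The straightforward part of this correspondence is just a per-feature dictionary. The entries below cover only the mandaba example, and treating Tense=Imp as PST plus IPFV is the Spanish-specific reading discussed above rather than a universally safe rule.

```python
# UD attribute=value pair -> UniMorph feature(s), for the features of "mandaba".
UD_TO_UNIMORPH = {
    ("POS", "VERB"):    ("V",),
    ("Mood", "Ind"):    ("IND",),
    ("Number", "Sing"): ("SG",),
    ("Person", "3"):    ("3",),
    ("Tense", "Imp"):   ("PST", "IPFV"),  # imperfect = past tense + imperfective aspect
}
```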
UD treebanks and UniMorph tables
Prima facie, the alignment task may seem trivial. But we've yet to explore the humans in the loop. This conversion is a hard problem because we're not operating on idealized schemata. We're actually annotating human decisions—and human mistakes. If both schemata were perfectly applied, their overlapping attributes could be mapped to each other simply, in a cross-lingual and totally general way. Unfortunately, the resources are imperfect realizations of their schemata. The cross-lingual, cross-resource, and within-resource problems that we'll note mean that we need a tailor-made solution for each language.
Showcasing their schemata, the Universal Dependencies and UniMorph projects each present large, annotated datasets. UD's v2.1 release BIBREF1 has 102 treebanks in 60 languages. The large resource, constructed by independent parties, evinces problems in the goal of a universal inventory of annotations. Annotators may choose to omit certain values (like the coerced gender of refrescante in fig:disagreement), and they may disagree on how a linguistic concept is encoded. (See, e.g., BIBREF11 's ( BIBREF11 ) description of the dative case.) Additionally, many of the treebanks were created by fully- or semi-automatic conversion from treebanks with less comprehensive annotation schemata than UD BIBREF0 . For instance, the Spanish word vas you go is incorrectly labeled Gender: Fem|Number: Pl because it ends in a character sequence which is common among feminine plural nouns. (Nevertheless, the part of speech field for vas is correct.)
UniMorph's development is more centralized and pipelined. Inflectional paradigms are scraped from Wiktionary, annotators map positions in the scraped data to MSDs, and the mapping is automatically applied to all of the scraped paradigms. Because annotators handle languages they are familiar with (or related ones), realization of the schema is also done on a language-by-language basis. Further, the scraping process does not capture lexical aspects that are not inflected, like noun gender in many languages. The schema permits inclusion of these details; their absence is an artifact of the data collection process. Finally, UniMorph records only exist for nouns, verbs, and adjectives, though the schema is broader than these categories.
For these reasons, we treat the corpora as imperfect realizations of the schemata. Moreover, we contend that ambiguity in the schemata leave the door open to allow for such imperfections. With no strict guidance, it's natural that annotators would take different paths. Nevertheless, modulo annotator disagreement, we assume that within a particular corpus, one word form will always be consistently annotated.
Three categories of annotation difficulty are missing values, language-specific attributes, and multiword expressions.
A Deterministic Conversion
In our work, the goal is not simply to translate one schema into the other, but to translate one resource (the imperfect manifestation of the schema) to match the other. The differences between the schemata and discrepancies in annotation mean that the transformation of annotations from one schema to the other is not straightforward.
Two naive options for the conversion are a lookup table of MSDs and a lookup table of the individual attribute-value pairs which comprise the MSDs. The former is untenable: the table of all UD feature combinations (including null features, excluding language-specific attributes) would have 2.445e17 entries. Of course, most combinations won't exist, but this gives a sense of the table's scale. Also, it doesn't leverage the factorial nature of the annotations: constructing the table would require a massive duplication of effort. On the other hand, attribute-value lookup lacks the flexibility to show how a pair of values interacts. Neither approach would handle language- and annotator-specific tendencies in the corpora.
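As a rough illustration of why the full-MSD table explodes (an assumption about how a figure of that order arises, not the authors' exact derivation): if attribute $a$ can take $|V_a|$ values or be absent, the number of possible feature bundles is

```latex
% Back-of-the-envelope count of UD feature bundles; |V_a| is the number of values
% of attribute a, and the +1 accounts for the attribute being absent.
\[
  N \;=\; \prod_{a \in \mathcal{A}} \bigl(|V_a| + 1\bigr)
\]
```

With roughly two dozen attributes averaging a handful of values each, this product reaches the order of $10^{17}$.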
Our approach to converting UD MSDs to UniMorph MSDs begins with the attribute-value lookup, then amends it on a language-specific basis. Alterations informed by the MSD and the word form, like insertion, substitution, and deletion, increase the number of agreeing annotations. They are critical for work that examines the MSD monolithically instead of feature-by-feature BIBREF25 , BIBREF26 : Without exact matches, converting the individual tags becomes hollow.
Beginning our process, we relied on documentation of the two schemata to create our initial, language-agnostic mapping of individual values. This mapping has 140 pairs in it. Because the mapping was derived purely from the schemata, it is a useful approximation of how well the schemata match up. We note, however, that the mapping does not handle idiosyncrasies like the many uses of dative or features which are represented in UniMorph by argument templates: possession and ergative–absolutive argument marking. The initial step of our conversion is using this mapping to populate a proposed UniMorph MSD.
As shown in sec:results, the initial proposal is often frustratingly deficient. Thus we introduce the post-edits. To concoct these, we looked into UniMorph corpora for these languages, compared these to the conversion outputs, and then sought to bring the conversion outputs closer to the annotations in the actual UniMorph corpora. When a form and its lemma existed in both corpora, we could directly inspect how the annotations differed. Our process of iteratively refining the conversion implies a table which exactly maps any combination of UD MSD and its related values (lemma, form, etc.) to a UniMorph MSD, though we do not store the table explicitly.
Some conversion rules we've created must be applied before or after others. These sequential dependencies keep the individual rules concise. Our post-editing procedure applies these ordered, language-specific rules to the initial MSD hypothesis.
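The concrete post-edit rules are language-specific and live in the released codebase; purely as an illustration of the overall shape (a language-agnostic lookup pass followed by ordered per-language edits), a hypothetical Python sketch is given below. The function names and data structures are placeholders, not the tool's API.

```python
# Hypothetical sketch of the conversion flow: a lookup pass using the
# language-agnostic mapping, then ordered language-specific post-edits.
def initial_proposal(ud_pos, ud_feats, mapping):
    """Populate a first UniMorph MSD from the attribute-value mapping."""
    atoms = set(mapping.get(("POS", ud_pos), []))
    for attr, value in ud_feats.items():
        atoms.update(mapping.get((attr, value), []))
    return atoms

# Each post-edit sees the current atoms plus the lemma and form, and may insert,
# substitute, or delete atoms. The order of application matters.
POST_EDITS = {
    "es": [],   # e.g. rules dropping unannotated lexical features for Spanish
}

def convert_token(lang, ud_pos, ud_feats, lemma, form, mapping):
    atoms = initial_proposal(ud_pos, ud_feats, mapping)
    for edit in POST_EDITS.get(lang, []):
        atoms = edit(atoms, lemma, form)
    return ";".join(sorted(atoms))   # real UniMorph MSDs use a canonical order
```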
Experiments
We evaluate our tool on two tasks: an intrinsic evaluation that measures the recall of the converted MSDs against the UniMorph tables, and an extrinsic evaluation that compares morphological taggers trained on the converted and the original annotations.
To be clear, our scope is limited to the schema conversion. Future work will explore NLP tasks that exploit both the created token-level UniMorph data and the existing type-level UniMorph data.
Intrinsic evaluation
We transform all UD data to the UniMorph schema. We compare the simple lookup-based transformation to the one with linguistically informed post-edits on all languages with both UD and UniMorph data. We then evaluate the recall of MSDs without partial credit.
Because the UniMorph tables only possess annotations for verbs, nouns, adjectives, or some combination, we can only examine performance for these parts of speech. We consider two words to be a match if their form and lemma are present in both resources. Syncretism allows a single surface form to realize multiple MSDs (Spanish mandaba can be first- or third-person), so we define success as the computed MSD matching any of the word's UniMorph MSDs. This gives rise to an equation for recall: of the word–lemma pairs found in both resources, how many of their UniMorph-converted MSDs are present in the UniMorph tables?
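Spelled out as code, the recall defined here could be computed roughly as follows; `unimorph_msds` is an assumed index from (form, lemma) pairs to the set of MSD strings in the UniMorph table, not part of either resource's tooling.

```python
# Sketch of the intrinsic recall metric: among (form, lemma) pairs present in both
# resources, how many converted MSDs match *any* UniMorph MSD for that pair?
def recall(converted_tokens, unimorph_msds):
    """
    converted_tokens: iterable of (form, lemma, converted_msd) from the UD treebank.
    unimorph_msds:    dict mapping (form, lemma) -> set of MSD strings from UniMorph.
    """
    hits, total = 0, 0
    for form, lemma, msd in converted_tokens:
        gold = unimorph_msds.get((form, lemma))
        if gold is None:          # word-lemma pair not in the UniMorph table: skip
            continue
        total += 1
        if msd in gold:           # syncretism: matching any gold MSD counts
            hits += 1
    return hits / total if total else 0.0
```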
Our problem here is not a learning problem, so the usual question of training and test splits is ill-posed. There is no training set, and the two resources for a given language make up a test set. The quality of our model, the conversion tool, comes from how well we encode prior knowledge about the relationship between the UD and UniMorph corpora.
Extrinsic evaluation
If the UniMorph-converted treebanks perform differently on downstream tasks, then they convey different information. This signals a failure of the conversion process. As a downstream task, we choose morphological tagging, a critical step to leveraging morphological information on new text.
We evaluate taggers trained on the transformed UD data, choosing eight languages randomly from the intersection of UD and UniMorph resources. We report the macro-averaged F1 score of attribute-value pairs on a held-out test set, with official train/validation/test splits provided in the UD treebanks. As a reference point, we also report tagging accuracy on those languages' untransformed data.
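For reference, a minimal sketch of macro-averaged F1 over attribute-value pairs is given below; it mirrors the reported metric in spirit, but the evaluation used in the experiments may differ in details such as the handling of empty tags.

```python
# Sketch of macro-averaged F1 over attribute-value pairs: treat each 'Attr=Value'
# pair as a label, compute F1 per label, then average over labels.
from collections import Counter

def macro_f1(gold_tags, pred_tags):
    """gold_tags, pred_tags: lists of sets of 'Attr=Value' strings, one set per token."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for gold, pred in zip(gold_tags, pred_tags):
        for label in gold & pred:
            tp[label] += 1
        for label in pred - gold:
            fp[label] += 1
        for label in gold - pred:
            fn[label] += 1
    labels = set(tp) | set(fp) | set(fn)
    f1s = []
    for label in labels:
        p = tp[label] / (tp[label] + fp[label]) if tp[label] + fp[label] else 0.0
        r = tp[label] / (tp[label] + fn[label]) if tp[label] + fn[label] else 0.0
        f1s.append(2 * p * r / (p + r) if p + r else 0.0)
    return sum(f1s) / len(f1s) if f1s else 0.0
```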
We use the state-of-the-art morphological tagger of BIBREF0 . It is a factored conditional random field with potentials for each attribute, attribute pair, and attribute transition. The potentials are computed by neural networks, predicting the values of each attribute jointly but not monolithically. Inference with the potentials is performed approximately by loopy belief propagation. We use the authors' hyperparameters.
We note a minor implementation detail for the sake of reproducibility. The tagger exploits explicit guidance about the attribute each value pertains to. The UniMorph schema's values are globally unique, but their attributes are not explicit. For example, the UniMorph Masc denotes a masculine gender. We amend the code of BIBREF0 to incorporate attribute identifiers for each UniMorph value.
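A minimal sketch of that amendment: pair each UniMorph value with an explicit attribute name before handing it to the tagger. The excerpt below is illustrative only; the full value-to-attribute table would come from the UniMorph schema documentation.

```python
# Illustrative excerpt: attach an attribute identifier to each UniMorph value so
# the tagger knows which attribute a value instantiates (e.g. MASC is a Gender).
VALUE_TO_ATTRIBUTE = {
    "MASC": "Gender", "FEM": "Gender",
    "SG": "Number",   "PL": "Number",
    "PST": "Tense",   "IPFV": "Aspect",
    "IND": "Mood",    "3": "Person",
    "V": "POS",       "N": "POS",      "ADJ": "POS",
}

def attach_attributes(msd):
    """Turn 'V;IND;PST;IPFV;3;SG' into ['POS=V', 'Mood=IND', ...]."""
    return [f"{VALUE_TO_ATTRIBUTE.get(v, 'UNK')}={v}" for v in msd.split(";")]
```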
Results
We present the intrinsic task's recall scores in tab:recall. Bear in mind that due to annotation errors in the original corpora (like the vas example from sec:resources), the optimal score is not always $100\%$ . Some shortcomings of recall come from irremediable annotation discrepancies. Largely, we are hamstrung by differences in choice of attributes to annotate. When one resource marks gender and the other marks case, we can't infer the gender of the word purely from its surface form. The resources themselves would need updating to encode the relevant morphosyntactic information. Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them: Arabic, Hindi, Lithuanian, Persian, and Russian. A full list of observed, irremediable discrepancies is presented alongside the codebase.
There are three other transformations for which we note no improvement here. Because of the problem in Basque argument encoding in the UniMorph dataset—which only contains verbs—we note no improvement in recall on Basque. Irish also does not improve: UD marks gender on nouns, while UniMorph marks case. Adjectives in UD are also underspecified. The verbs, though, are already correct with the simple mapping. Finally, with Dutch, the UD annotations are impoverished compared to the UniMorph annotations, and missing attributes cannot be inferred without external knowledge.
For the extrinsic task, the performance is reasonably similar whether the tagger is trained on the UniMorph-converted or the original UD annotations; see tab:tagging. A large fluctuation would suggest that the two annotations encode distinct information. On the contrary, the similarities suggest that the UniMorph-mapped MSDs have similar content. We note that in every case, tagging F1 increased, albeit by amounts as small as $0.16$ points. This is in part due to the information that is lost in the conversion. UniMorph's schema does not indicate the type of pronoun (demonstrative, interrogative, etc.), and when lexical information is not recorded in UniMorph, we delete it from the MSD during transformation. On the other hand, UniMorph's atomic tags have more parts to guess, but they are often related. (E.g., IPFV always entails PST in Spanish.) Altogether, these forces seem to have little impact on tagging performance.
Related Work
The goal of a tagset-to-tagset mapping of morphological annotations is shared by the Interset project BIBREF28 . Interset decodes features in the source corpus to a tag interlingua, then encodes that into target corpus features. (The idea of an interlingua is drawn from machine translation, where a prevailing early mindset was to convert to a universal representation, then encode that representation's semantics in the target language. Our approach, by contrast, is a direct flight from the source to the target.) Because UniMorph corpora are noisy, the encoding from the interlingua would have to be rewritten for each target. Further, decoding the UD MSD into the interlingua cannot leverage external information like the lemma and form.
The creators of HamleDT sought to harmonize dependency annotations among treebanks, similar to our goal of harmonizing across resources BIBREF29 . The treebanks they sought to harmonize used multiple diverse annotation schemes, which the authors unified under a single scheme.
BIBREF30 present mappings into a coarse, universal part of speech for 22 languages. Working with POS tags rather than morphological tags (which have far more dimensions), their space of options to harmonize is much smaller than ours.
Our extrinsic evaluation is most in line with the paradigm of BIBREF31 (and similar work therein), who compare syntactic parser performance on UD treebanks annotated with two styles of dependency representation. Our problem differs, though, in that the dependency representations express different relationships, while our two schemata vastly overlap. As our conversion is lossy, we do not appraise the learnability of representations as they did.
In addition to using the number of extra rules as a proxy for harmony between resources, one could perform cross-lingual projection of morphological tags BIBREF32 , BIBREF33 . Our approach succeeds even without parallel corpora.
Conclusion and Future Work
We created a tool for annotating Universal Dependencies CoNLL-U files with UniMorph annotations. Our tool is ready to use off-the-shelf today, requires no training, and is deterministic. While under-specification necessitates a lossy and imperfect conversion, ours is interpretable. Patterns of mistakes can be identified and ameliorated.
The tool allows a bridge between resources annotated in the Universal Dependencies and Universal Morphology (UniMorph) schemata. As the Universal Dependencies project provides a set of treebanks with token-level annotation, while the UniMorph project releases type-level annotated tables, the newfound compatibility opens up new experiments. A prime example of exploiting token- and type-level data is BIBREF34 . That work presents a part-of-speech (POS) dictionary built from Wiktionary, where the POS tagger is also constrained to options available in their type-level POS dictionary, improving performance. Our transformation means that datasets are prepared for similar experiments with morphological tagging. It would also be reasonable to incorporate this tool as a subroutine to UDPipe BIBREF35 and Udapi BIBREF36 . We leave open the task of converting in the opposite direction, turning UniMorph MSDs into Universal Dependencies MSDs.
Because our conversion rules are interpretable, we identify shortcomings in both resources, using each as validation for the other. We were able to find specific instances of incorrectly applied UniMorph annotation, as well as specific instances of cross-lingual inconsistency in both resources. These findings will harden both resources and better align them with their goal of universal, cross-lingual annotation.
Acknowledgments
We thank Hajime Senuma and John Sylak-Glassman for early comments in devising the starting language-independent mapping from Universal Dependencies to UniMorph. | irremediable annotation discrepancies, differences in choice of attributes to annotate, The resources themselves would need updating to encode the relevant morphosyntactic information. Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them, the two annotations encode distinct information, incorrectly applied UniMorph annotation, cross-lingual inconsistency in both resources |
e477e494fe15a978ff9c0a5f1c88712cdaec0c5c | e477e494fe15a978ff9c0a5f1c88712cdaec0c5c_0 | Q: Do they look for inconsistencies between different languages' annotations in UniMorph?
Text: Introduction
The two largest standardized, cross-lingual datasets for morphological annotation are provided by the Universal Dependencies BIBREF1 and Universal Morphology BIBREF2 , BIBREF3 projects. Each project's data are annotated according to its own cross-lingual schema, prescribing how features like gender or case should be marked. The schemata capture largely similar information, so one may want to unify the two resources and leverage both UD's token-level treebanks and UniMorph's type-level lookup tables of paradigms. Unfortunately, neither resource perfectly realizes its schema. On a dataset-by-dataset basis, they incorporate annotator errors, omissions, and human decisions when the schemata are underspecified; one such example is in fig:disagreement.
A dataset-by-dataset problem demands a dataset-by-dataset solution; our task is not to translate a schema, but to translate a resource. Starting from the idealized schema, we create a rule-based tool for converting UD-schema annotations to UniMorph annotations, incorporating language-specific post-edits that both correct infelicities and also increase harmony between the datasets themselves (rather than the schemata). We apply this conversion to the 31 languages with both UD and UniMorph data, and we report our method's recall, showing an improvement over the strategy which just maps corresponding schematic features to each other. Further, we show similar downstream performance for each annotation scheme in the task of morphological tagging.
This tool enables a synergistic use of UniMorph and Universal Dependencies, as well as teasing out the annotation discrepancies within and across projects. When one dataset disobeys its schema or disagrees with a related language, the flaws may not be noticed except by such a methodological dive into the resources. When the maintainers of the resources ameliorate these flaws, the resources move closer to the goal of a universal, cross-lingual inventory of features for morphological annotation.
The contributions of this work are a rule-based tool for converting UD annotations to UniMorph annotations, an intrinsic and extrinsic evaluation of that conversion, and an analysis of annotation discrepancies within and across the two projects.
Background: Morphological Inflection
Morphological inflection is the act of altering the base form of a word (the lemma, represented in fixed-width type) to encode morphosyntactic features. As an example from English, prove takes on the form proved to indicate that the action occurred in the past. (We will represent all surface forms in quotation marks.) The process occurs in the majority of the world's widely-spoken languages, typically through meaningful affixes. The breadth of forms created by inflection creates a challenge of data sparsity for natural language processing: The likelihood of observing a particular word form diminishes.
A classic result in psycholinguistics BIBREF4 shows that inflectional morphology is a fully productive process. Indeed, it cannot be that humans simply have the equivalent of a lookup table, where they store the inflected forms for retrieval as the syntactic context requires. Instead, there needs to be a mental process that can generate properly inflected words on demand. BIBREF4 showed this insightfully through the wug-test, an experiment where she forced participants to correctly inflect out-of-vocabulary lemmata, such as the novel noun wug.
Certain features of a word do not vary depending on its context: In German or Spanish where nouns are gendered, the word for onion will always be grammatically feminine. Thus, to prepare for later discussion, we divide the morphological features of a word into two categories: the modifiable inflectional features and the fixed lexical features.
A part of speech (POS) is a coarse syntactic category (like verb) that begets a word's particular menu of lexical and inflectional features. In English, verbs express no gender, and adjectives do not reflect person or number. The part of speech dictates a set of inflectional slots to be filled by the surface forms. Completing these slots for a given lemma and part of speech gives a paradigm: a mapping from slots to surface forms. Regular English verbs have five slots in their paradigm BIBREF5 , which we illustrate for the verb prove, using simple labels for the forms in tab:ptb.
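As a concrete, if informal, rendering of such a paradigm, the five slots of a regular English verb can be written as a small mapping; the slot labels below are stand-ins for the simple labels used in tab:ptb, which are not reproduced here.

```python
# The paradigm of the regular English verb "prove" as a slot-to-form mapping.
# Slot labels are informal stand-ins for the paper's own labels.
PROVE = {
    "base":               "prove",
    "3sg present":        "proves",
    "present participle": "proving",
    "past":               "proved",
    "past participle":    "proved",
}
```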
A morphosyntactic schema prescribes how language can be annotated—giving stricter categories than our simple labels for prove—and can vary in the level of detail provided. Part of speech tags are an example of a very coarse schema, ignoring details of person, gender, and number. A slightly finer-grained schema for English is the Penn Treebank tagset BIBREF6 , which includes signals for English morphology. For instance, its VBZ tag pertains to the specially inflected 3rd-person singular, present-tense verb form (e.g. proves in tab:ptb).
If the tag in a schema is detailed enough that it exactly specifies a slot in a paradigm, it is called a morphosyntactic description (MSD). These descriptions require varying amounts of detail: While the English verbal paradigm is small enough to fit on a page, the verbal paradigm of the Northeast Caucasian language Archi can have over 1500000 slots BIBREF7 .
Two Schemata, Two Philosophies
Unlike the Penn Treebank tags, the UD and UniMorph schemata are cross-lingual and include a fuller lexicon of attribute-value pairs, such as Person: 1. Each was built according to a different set of principles. UD's schema is constructed bottom-up, adapting to include new features when they're identified in languages. UniMorph, conversely, is top-down: A cross-lingual survey of the literature of morphological phenomena guided its design. UniMorph aims to be linguistically complete, containing all known morphosyntactic attributes. Both schemata share one long-term goal: a total inventory for annotating the possible morphosyntactic features of a word.
Universal Dependencies
The Universal Dependencies morphological schema comprises part of speech and 23 additional attributes (also called features in UD) annotating meaning or syntax, as well as language-specific attributes. In order to ensure consistent annotation, attributes are included into the general UD schema if they occur in several corpora. Language-specific attributes are used when only one corpus annotates for a specific feature.
The UD schema seeks to balance language-specific and cross-lingual concerns. It annotates for both inflectional features such as case and lexical features such as gender. Additionally, the UD schema annotates for features which can be interpreted as derivational in some languages. For example, the Czech UD guidance uses a Coll value for the Number feature to denote mass nouns (for example, "lidstvo" "humankind" from the root "lid" "people").
UD represents a confederation of datasets BIBREF8 annotated with dependency relationships (which are not the focus of this work) and morphosyntactic descriptions. Each dataset is an annotated treebank, making it a resource of token-level annotations. The schema is guided by these treebanks, with feature names chosen for relevance to native speakers. (In sec:unimorph, we will contrast this with UniMorph's treatment of morphosyntactic categories.) The UD datasets have been used in the CoNLL shared tasks BIBREF9 .
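Because each UD dataset is a CoNLL-U treebank, a token's morphosyntactic description sits in the FEATS column as '|'-separated 'Attribute=Value' pairs. A minimal reader might look like the sketch below; it assumes the standard CoNLL-U layout and skips multiword-token and empty-node lines.

```python
# Sketch of reading token-level morphology out of a CoNLL-U treebank.
# Each non-comment line has 10 tab-separated columns; FEATS is column 6,
# holding '|'-separated 'Attr=Value' pairs (or '_' when empty).
def read_conllu(path):
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.rstrip("\n")
            if not line or line.startswith("#"):
                continue                      # sentence boundary or comment
            cols = line.split("\t")
            if "-" in cols[0] or "." in cols[0]:
                continue                      # skip multiword-token / empty-node lines
            form, lemma, upos, feats = cols[1], cols[2], cols[3], cols[5]
            feat_dict = {} if feats == "_" else dict(
                kv.split("=", 1) for kv in feats.split("|"))
            yield form, lemma, upos, feat_dict
```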
UniMorph
In the Universal Morphological Feature Schema BIBREF10 , there are at least 212 values, spread across 23 attributes. It identifies some attributes that UD excludes like information structure and deixis, as well as providing more values for certain attributes, like 23 different noun classes endemic to Bantu languages. As it is a schema for marking morphology, its part of speech attribute does not have POS values for punctuation, symbols, or miscellany (Punct, Sym, and X in Universal Dependencies).
Like the UD schema, the decomposition of a word into its lemma and MSD is directly comparable across languages. Its features are informed by a distinction between universal categories, which are widespread and psychologically real to speakers; and comparative concepts, only used by linguistic typologists to compare languages BIBREF11 . Additionally, it strives for identity of meaning across languages, not simply similarity of terminology. As a prime example, it does not regularly label a dative case for nouns, for reasons explained in depth by BIBREF11 .
The UniMorph resources for a language contain complete paradigms extracted from Wiktionary BIBREF12 , BIBREF13 . Word types are annotated to form a database, mapping a lemma–tag pair to a surface form. The schema is explained in detail in BIBREF10 . It has been used in the SIGMORPHON shared task BIBREF14 and the CoNLL–SIGMORPHON shared tasks BIBREF15 , BIBREF16 . Several components of the UniMorph schema have been adopted by UD.
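UniMorph tables are released as plain text with three tab-separated fields per line (lemma, inflected form, feature bundle). The loader below is only a sketch of how the type-level data can be indexed for later comparison with token-level annotations.

```python
# Sketch of loading a UniMorph table. Each line has three tab-separated fields:
# lemma, inflected form, and a ';'-separated feature bundle (the MSD).
from collections import defaultdict

def load_unimorph(path):
    """Map (form, lemma) to the set of MSDs attested for it; syncretic forms get several."""
    table = defaultdict(set)
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.rstrip("\n")
            if not line:
                continue
            lemma, form, msd = line.split("\t")
            table[(form, lemma)].add(msd)
    return table
```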
Similarities in the annotation
While the two schemata annotate different features, their annotations often look largely similar. Consider the attested annotation of the Spanish word mandaba (I/he/she/it) commanded. tab:annotations shows that these annotations share many attributes.
Some conversions are straightforward: VERB to V, Mood=Ind to IND, Number=Sing to SG, and Person=3 to 3. One might also suggest mapping Tense=Imp to IPFV, though this crosses semantic categories: IPFV represents the imperfective aspect, whereas Tense=Imp comes from imperfect, the English name often given to Spanish's pasado continuo form. The imperfect is a verb form which combines both past tense and imperfective aspect. UniMorph chooses to split this into the atoms PST and IPFV, while UD unifies them according to the familiar name of the tense.
UD treebanks and UniMorph tables
Prima facie, the alignment task may seem trivial. But we've yet to explore the humans in the loop. This conversion is a hard problem precisely because we are not operating on idealized schemata: we are annotating human decisions, and human mistakes. If both schemata were perfectly applied, their overlapping attributes could be mapped to each other simply, in a cross-lingual and totally general way. Unfortunately, the resources are imperfect realizations of their schemata. The cross-lingual, cross-resource, and within-resource problems that we'll note mean that we need a tailor-made solution for each language.
Showcasing their schemata, the Universal Dependencies and UniMorph projects each present large, annotated datasets. UD's v2.1 release BIBREF1 has 102 treebanks in 60 languages. The large resource, constructed by independent parties, evinces problems in the goal of a universal inventory of annotations. Annotators may choose to omit certain values (like the coerced gender of refrescante in fig:disagreement), and they may disagree on how a linguistic concept is encoded. (See, e.g., BIBREF11 's ( BIBREF11 ) description of the dative case.) Additionally, many of the treebanks were created by fully- or semi-automatic conversion from treebanks with less comprehensive annotation schemata than UD BIBREF0 . For instance, the Spanish word vas you go is incorrectly labeled Gender: Fem|Number: Pl because it ends in a character sequence which is common among feminine plural nouns. (Nevertheless, the part of speech field for vas is correct.)
UniMorph's development is more centralized and pipelined. Inflectional paradigms are scraped from Wiktionary, annotators map positions in the scraped data to MSDs, and the mapping is automatically applied to all of the scraped paradigms. Because annotators handle languages they are familiar with (or related ones), realization of the schema is also done on a language-by-language basis. Further, the scraping process does not capture lexical aspects that are not inflected, like noun gender in many languages. The schema permits inclusion of these details; their absence is an artifact of the data collection process. Finally, UniMorph records only exist for nouns, verbs, and adjectives, though the schema is broader than these categories.
For these reasons, we treat the corpora as imperfect realizations of the schemata. Moreover, we contend that ambiguity in the schemata leave the door open to allow for such imperfections. With no strict guidance, it's natural that annotators would take different paths. Nevertheless, modulo annotator disagreement, we assume that within a particular corpus, one word form will always be consistently annotated.
Three categories of annotation difficulty are missing values, language-specific attributes, and multiword expressions.
A Deterministic Conversion
In our work, the goal is not simply to translate one schema into the other, but to translate one resource (the imperfect manifestation of the schema) to match the other. The differences between the schemata and discrepancies in annotation mean that the transformation of annotations from one schema to the other is not straightforward.
Two naive options for the conversion are a lookup table of MSDs and a lookup table of the individual attribute-value pairs which comprise the MSDs. The former is untenable: the table of all UD feature combinations (including null features, excluding language-specific attributes) would have 2.445e17 entries. Of course, most combinations won't exist, but this gives a sense of the table's scale. Also, it doesn't leverage the factorial nature of the annotations: constructing the table would require a massive duplication of effort. On the other hand, attribute-value lookup lacks the flexibility to show how a pair of values interacts. Neither approach would handle language- and annotator-specific tendencies in the corpora.
Our approach to converting UD MSDs to UniMorph MSDs begins with the attribute-value lookup, then amends it on a language-specific basis. Alterations informed by the MSD and the word form, like insertion, substitution, and deletion, increase the number of agreeing annotations. They are critical for work that examines the MSD monolithically instead of feature-by-feature BIBREF25 , BIBREF26 : Without exact matches, converting the individual tags becomes hollow.
Beginning our process, we relied on documentation of the two schemata to create our initial, language-agnostic mapping of individual values. This mapping has 140 pairs in it. Because the mapping was derived purely from the schemata, it is a useful approximation of how well the schemata match up. We note, however, that the mapping does not handle idiosyncrasies like the many uses of dative or features which are represented in UniMorph by argument templates: possession and ergative–absolutive argument marking. The initial step of our conversion is using this mapping to populate a proposed UniMorph MSD.
As shown in sec:results, the initial proposal is often frustratingly deficient. Thus we introduce the post-edits. To concoct these, we looked into UniMorph corpora for these languages, compared these to the conversion outputs, and then sought to bring the conversion outputs closer to the annotations in the actual UniMorph corpora. When a form and its lemma existed in both corpora, we could directly inspect how the annotations differed. Our process of iteratively refining the conversion implies a table which exactly maps any combination of UD MSD and its related values (lemma, form, etc.) to a UniMorph MSD, though we do not store the table explicitly.
Some conversion rules we've created must be applied before or after others. These sequential dependencies keep the individual rules concise. Our post-editing procedure applies these ordered, language-specific rules to the initial MSD hypothesis.
Experiments
We evaluate our tool on two tasks: an intrinsic evaluation that measures the recall of the converted MSDs against the UniMorph tables, and an extrinsic evaluation that compares morphological taggers trained on the converted and the original annotations.
To be clear, our scope is limited to the schema conversion. Future work will explore NLP tasks that exploit both the created token-level UniMorph data and the existing type-level UniMorph data.
Intrinsic evaluation
We transform all UD data to the UniMorph schema. We compare the simple lookup-based transformation to the one with linguistically informed post-edits on all languages with both UD and UniMorph data. We then evaluate the recall of MSDs without partial credit.
Because the UniMorph tables only possess annotations for verbs, nouns, adjectives, or some combination, we can only examine performance for these parts of speech. We consider two words to be a match if their form and lemma are present in both resources. Syncretism allows a single surface form to realize multiple MSDs (Spanish mandaba can be first- or third-person), so we define success as the computed MSD matching any of the word's UniMorph MSDs. This gives rise to an equation for recall: of the word–lemma pairs found in both resources, how many of their UniMorph-converted MSDs are present in the UniMorph tables?
Our problem here is not a learning problem, so the usual question of training and test splits is ill-posed. There is no training set, and the two resources for a given language make up a test set. The quality of our model, the conversion tool, comes from how well we encode prior knowledge about the relationship between the UD and UniMorph corpora.
Extrinsic evaluation
If the UniMorph-converted treebanks perform differently on downstream tasks, then they convey different information. This signals a failure of the conversion process. As a downstream task, we choose morphological tagging, a critical step to leveraging morphological information on new text.
We evaluate taggers trained on the transformed UD data, choosing eight languages randomly from the intersection of UD and UniMorph resources. We report the macro-averaged F1 score of attribute-value pairs on a held-out test set, with official train/validation/test splits provided in the UD treebanks. As a reference point, we also report tagging accuracy on those languages' untransformed data.
We use the state-of-the-art morphological tagger of BIBREF0 . It is a factored conditional random field with potentials for each attribute, attribute pair, and attribute transition. The potentials are computed by neural networks, predicting the values of each attribute jointly but not monolithically. Inference with the potentials is performed approximately by loopy belief propagation. We use the authors' hyperparameters.
We note a minor implementation detail for the sake of reproducibility. The tagger exploits explicit guidance about the attribute each value pertains to. The UniMorph schema's values are globally unique, but their attributes are not explicit. For example, the UniMorph Masc denotes a masculine gender. We amend the code of BIBREF0 to incorporate attribute identifiers for each UniMorph value.
Results
We present the intrinsic task's recall scores in tab:recall. Bear in mind that due to annotation errors in the original corpora (like the vas example from sec:resources), the optimal score is not always $100\%$ . Some shortcomings of recall come from irremediable annotation discrepancies. Largely, we are hamstrung by differences in choice of attributes to annotate. When one resource marks gender and the other marks case, we can't infer the gender of the word purely from its surface form. The resources themselves would need updating to encode the relevant morphosyntactic information. Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them: Arabic, Hindi, Lithuanian, Persian, and Russian. A full list of observed, irremediable discrepancies is presented alongside the codebase.
There are three other transformations for which we note no improvement here. Because of the problem in Basque argument encoding in the UniMorph dataset—which only contains verbs—we note no improvement in recall on Basque. Irish also does not improve: UD marks gender on nouns, while UniMorph marks case. Adjectives in UD are also underspecified. The verbs, though, are already correct with the simple mapping. Finally, with Dutch, the UD annotations are impoverished compared to the UniMorph annotations, and missing attributes cannot be inferred without external knowledge.
For the extrinsic task, the performance is reasonably similar whether the tagger is trained on the UniMorph-converted or the original UD annotations; see tab:tagging. A large fluctuation would suggest that the two annotations encode distinct information. On the contrary, the similarities suggest that the UniMorph-mapped MSDs have similar content. We note that in every case, tagging F1 increased, albeit by amounts as small as $0.16$ points. This is in part due to the information that is lost in the conversion. UniMorph's schema does not indicate the type of pronoun (demonstrative, interrogative, etc.), and when lexical information is not recorded in UniMorph, we delete it from the MSD during transformation. On the other hand, UniMorph's atomic tags have more parts to guess, but they are often related. (E.g., IPFV always entails PST in Spanish.) Altogether, these forces seem to have little impact on tagging performance.
Related Work
The goal of a tagset-to-tagset mapping of morphological annotations is shared by the Interset project BIBREF28 . Interset decodes features in the source corpus to a tag interlingua, then encodes that into target corpus features. (The idea of an interlingua is drawn from machine translation, where a prevailing early mindset was to convert to a universal representation, then encode that representation's semantics in the target language. Our approach, by contrast, is a direct flight from the source to the target.) Because UniMorph corpora are noisy, the encoding from the interlingua would have to be rewritten for each target. Further, decoding the UD MSD into the interlingua cannot leverage external information like the lemma and form.
The creators of HamleDT sought to harmonize dependency annotations among treebanks, similar to our goal of harmonizing across resources BIBREF29 . The treebanks they sought to harmonize used multiple diverse annotation schemes, which the authors unified under a single scheme.
BIBREF30 present mappings into a coarse, universal part of speech for 22 languages. Working with POS tags rather than morphological tags (which have far more dimensions), their space of options to harmonize is much smaller than ours.
Our extrinsic evaluation is most in line with the paradigm of BIBREF31 (and similar work therein), who compare syntactic parser performance on UD treebanks annotated with two styles of dependency representation. Our problem differs, though, in that the dependency representations express different relationships, while our two schemata vastly overlap. As our conversion is lossy, we do not appraise the learnability of representations as they did.
In addition to using the number of extra rules as a proxy for harmony between resources, one could perform cross-lingual projection of morphological tags BIBREF32 , BIBREF33 . Our approach succeeds even without parallel corpora.
Conclusion and Future Work
We created a tool for annotating Universal Dependencies CoNLL-U files with UniMorph annotations. Our tool is ready to use off-the-shelf today, requires no training, and is deterministic. While under-specification necessitates a lossy and imperfect conversion, ours is interpretable. Patterns of mistakes can be identified and ameliorated.
The tool allows a bridge between resources annotated in the Universal Dependencies and Universal Morphology (UniMorph) schemata. As the Universal Dependencies project provides a set of treebanks with token-level annotation, while the UniMorph project releases type-level annotated tables, the newfound compatibility opens up new experiments. A prime example of exploiting token- and type-level data is BIBREF34 . That work presents a part-of-speech (POS) dictionary built from Wiktionary, where the POS tagger is also constrained to options available in their type-level POS dictionary, improving performance. Our transformation means that datasets are prepared for similar experiments with morphological tagging. It would also be reasonable to incorporate this tool as a subroutine to UDPipe BIBREF35 and Udapi BIBREF36 . We leave open the task of converting in the opposite direction, turning UniMorph MSDs into Universal Dependencies MSDs.
Because our conversion rules are interpretable, we identify shortcomings in both resources, using each as validation for the other. We were able to find specific instances of incorrectly applied UniMorph annotation, as well as specific instances of cross-lingual inconsistency in both resources. These findings will harden both resources and better align them with their goal of universal, cross-lingual annotation.
Acknowledgments
We thank Hajime Senuma and John Sylak-Glassman for early comments in devising the starting language-independent mapping from Universal Dependencies to UniMorph. | Yes |
04495845251b387335bf2e77e2c423130f43c7d9 | 04495845251b387335bf2e77e2c423130f43c7d9_0 | Q: Do they look for inconsistencies between different UD treebanks?
Text: Introduction
The two largest standardized, cross-lingual datasets for morphological annotation are provided by the Universal Dependencies BIBREF1 and Universal Morphology BIBREF2 , BIBREF3 projects. Each project's data are annotated according to its own cross-lingual schema, prescribing how features like gender or case should be marked. The schemata capture largely similar information, so one may want to unify the two resources and leverage both UD's token-level treebanks and UniMorph's type-level lookup tables of paradigms. Unfortunately, neither resource perfectly realizes its schema. On a dataset-by-dataset basis, they incorporate annotator errors, omissions, and human decisions when the schemata are underspecified; one such example is in fig:disagreement.
A dataset-by-dataset problem demands a dataset-by-dataset solution; our task is not to translate a schema, but to translate a resource. Starting from the idealized schema, we create a rule-based tool for converting UD-schema annotations to UniMorph annotations, incorporating language-specific post-edits that both correct infelicities and also increase harmony between the datasets themselves (rather than the schemata). We apply this conversion to the 31 languages with both UD and UniMorph data, and we report our method's recall, showing an improvement over the strategy which just maps corresponding schematic features to each other. Further, we show similar downstream performance for each annotation scheme in the task of morphological tagging.
This tool enables a synergistic use of UniMorph and Universal Dependencies, as well as teasing out the annotation discrepancies within and across projects. When one dataset disobeys its schema or disagrees with a related language, the flaws may not be noticed except by such a methodological dive into the resources. When the maintainers of the resources ameliorate these flaws, the resources move closer to the goal of a universal, cross-lingual inventory of features for morphological annotation.
The contributions of this work are a rule-based tool for converting UD annotations to UniMorph annotations, an intrinsic and extrinsic evaluation of that conversion, and an analysis of annotation discrepancies within and across the two projects.
Background: Morphological Inflection
Morphological inflection is the act of altering the base form of a word (the lemma, represented in fixed-width type) to encode morphosyntactic features. As an example from English, prove takes on the form proved to indicate that the action occurred in the past. (We will represent all surface forms in quotation marks.) The process occurs in the majority of the world's widely-spoken languages, typically through meaningful affixes. The breadth of forms created by inflection creates a challenge of data sparsity for natural language processing: The likelihood of observing a particular word form diminishes.
A classic result in psycholinguistics BIBREF4 shows that inflectional morphology is a fully productive process. Indeed, it cannot be that humans simply have the equivalent of a lookup table, where they store the inflected forms for retrieval as the syntactic context requires. Instead, there needs to be a mental process that can generate properly inflected words on demand. BIBREF4 showed this insightfully through the wug-test, an experiment where she forced participants to correctly inflect out-of-vocabulary lemmata, such as the novel noun wug.
Certain features of a word do not vary depending on its context: In German or Spanish where nouns are gendered, the word for onion will always be grammatically feminine. Thus, to prepare for later discussion, we divide the morphological features of a word into two categories: the modifiable inflectional features and the fixed lexical features.
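As a tiny illustration of this split (the example word and the exact feature names are ours, chosen for familiarity): for the Spanish noun cebolla "onion", gender is a fixed lexical property, while number is an inflectional slot.

```python
# Lexical vs. inflectional features for the Spanish noun "cebolla" (onion):
# gender is fixed for the lexeme; number varies with the syntactic context.
CEBOLLA = {
    "lexical":      {"Gender": "Fem"},
    "inflectional": {"Number": ["Sing", "Plur"]},
}
```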
A part of speech (POS) is a coarse syntactic category (like verb) that begets a word's particular menu of lexical and inflectional features. In English, verbs express no gender, and adjectives do not reflect person or number. The part of speech dictates a set of inflectional slots to be filled by the surface forms. Completing these slots for a given lemma and part of speech gives a paradigm: a mapping from slots to surface forms. Regular English verbs have five slots in their paradigm BIBREF5 , which we illustrate for the verb prove, using simple labels for the forms in tab:ptb.
A morphosyntactic schema prescribes how language can be annotated—giving stricter categories than our simple labels for prove—and can vary in the level of detail provided. Part of speech tags are an example of a very coarse schema, ignoring details of person, gender, and number. A slightly finer-grained schema for English is the Penn Treebank tagset BIBREF6 , which includes signals for English morphology. For instance, its VBZ tag pertains to the specially inflected 3rd-person singular, present-tense verb form (e.g. proves in tab:ptb).
If the tag in a schema is detailed enough that it exactly specifies a slot in a paradigm, it is called a morphosyntactic description (MSD). These descriptions require varying amounts of detail: While the English verbal paradigm is small enough to fit on a page, the verbal paradigm of the Northeast Caucasian language Archi can have over 1500000 slots BIBREF7 .
Two Schemata, Two Philosophies
Unlike the Penn Treebank tags, the UD and UniMorph schemata are cross-lingual and include a fuller lexicon of attribute-value pairs, such as Person: 1. Each was built according to a different set of principles. UD's schema is constructed bottom-up, adapting to include new features when they're identified in languages. UniMorph, conversely, is top-down: A cross-lingual survey of the literature of morphological phenomena guided its design. UniMorph aims to be linguistically complete, containing all known morphosyntactic attributes. Both schemata share one long-term goal: a total inventory for annotating the possible morphosyntactic features of a word.
Universal Dependencies
The Universal Dependencies morphological schema comprises part of speech and 23 additional attributes (also called features in UD) annotating meaning or syntax, as well as language-specific attributes. In order to ensure consistent annotation, attributes are included into the general UD schema if they occur in several corpora. Language-specific attributes are used when only one corpus annotates for a specific feature.
The UD schema seeks to balance language-specific and cross-lingual concerns. It annotates for both inflectional features such as case and lexical features such as gender. Additionally, the UD schema annotates for features which can be interpreted as derivational in some languages. For example, the Czech UD guidance uses a Coll value for the Number feature to denote mass nouns (for example, "lidstvo" "humankind" from the root "lid" "people").
UD represents a confederation of datasets BIBREF8 annotated with dependency relationships (which are not the focus of this work) and morphosyntactic descriptions. Each dataset is an annotated treebank, making it a resource of token-level annotations. The schema is guided by these treebanks, with feature names chosen for relevance to native speakers. (In sec:unimorph, we will contrast this with UniMorph's treatment of morphosyntactic categories.) The UD datasets have been used in the CoNLL shared tasks BIBREF9 .
UniMorph
In the Universal Morphological Feature Schema BIBREF10 , there are at least 212 values, spread across 23 attributes. It identifies some attributes that UD excludes like information structure and deixis, as well as providing more values for certain attributes, like 23 different noun classes endemic to Bantu languages. As it is a schema for marking morphology, its part of speech attribute does not have POS values for punctuation, symbols, or miscellany (Punct, Sym, and X in Universal Dependencies).
Like the UD schema, the decomposition of a word into its lemma and MSD is directly comparable across languages. Its features are informed by a distinction between universal categories, which are widespread and psychologically real to speakers; and comparative concepts, only used by linguistic typologists to compare languages BIBREF11 . Additionally, it strives for identity of meaning across languages, not simply similarity of terminology. As a prime example, it does not regularly label a dative case for nouns, for reasons explained in depth by BIBREF11 .
The UniMorph resources for a language contain complete paradigms extracted from Wiktionary BIBREF12 , BIBREF13 . Word types are annotated to form a database, mapping a lemma–tag pair to a surface form. The schema is explained in detail in BIBREF10 . It has been used in the SIGMORPHON shared task BIBREF14 and the CoNLL–SIGMORPHON shared tasks BIBREF15 , BIBREF16 . Several components of the UniMorph schema have been adopted by UD.
Similarities in the annotation
While the two schemata annotate different features, their annotations often look largely similar. Consider the attested annotation of the Spanish word mandaba (I/he/she/it) commanded. tab:annotations shows that these annotations share many attributes.
Some conversions are straightforward: VERB to V, Mood=Ind to IND, Number=Sing to SG, and Person=3 to 3. One might also suggest mapping Tense=Imp to IPFV, though this crosses semantic categories: IPFV represents the imperfective aspect, whereas Tense=Imp comes from imperfect, the English name often given to Spanish's pasado continuo form. The imperfect is a verb form which combines both past tense and imperfective aspect. UniMorph chooses to split this into the atoms PST and IPFV, while UD unifies them according to the familiar name of the tense.
UD treebanks and UniMorph tables
Prima facie, the alignment task may seem trivial. But we've yet to explore the humans in the loop. This conversion is a hard problem precisely because we are not operating on idealized schemata: we are annotating human decisions, and human mistakes. If both schemata were perfectly applied, their overlapping attributes could be mapped to each other simply, in a cross-lingual and totally general way. Unfortunately, the resources are imperfect realizations of their schemata. The cross-lingual, cross-resource, and within-resource problems that we'll note mean that we need a tailor-made solution for each language.
Showcasing their schemata, the Universal Dependencies and UniMorph projects each present large, annotated datasets. UD's v2.1 release BIBREF1 has 102 treebanks in 60 languages. The large resource, constructed by independent parties, evinces problems in the goal of a universal inventory of annotations. Annotators may choose to omit certain values (like the coerced gender of refrescante in fig:disagreement), and they may disagree on how a linguistic concept is encoded. (See, e.g., BIBREF11 's ( BIBREF11 ) description of the dative case.) Additionally, many of the treebanks were created by fully- or semi-automatic conversion from treebanks with less comprehensive annotation schemata than UD BIBREF0 . For instance, the Spanish word vas you go is incorrectly labeled Gender: Fem|Number: Pl because it ends in a character sequence which is common among feminine plural nouns. (Nevertheless, the part of speech field for vas is correct.)
UniMorph's development is more centralized and pipelined. Inflectional paradigms are scraped from Wiktionary, annotators map positions in the scraped data to MSDs, and the mapping is automatically applied to all of the scraped paradigms. Because annotators handle languages they are familiar with (or related ones), realization of the schema is also done on a language-by-language basis. Further, the scraping process does not capture lexical aspects that are not inflected, like noun gender in many languages. The schema permits inclusion of these details; their absence is an artifact of the data collection process. Finally, UniMorph records only exist for nouns, verbs, and adjectives, though the schema is broader than these categories.
For these reasons, we treat the corpora as imperfect realizations of the schemata. Moreover, we contend that ambiguity in the schemata leave the door open to allow for such imperfections. With no strict guidance, it's natural that annotators would take different paths. Nevertheless, modulo annotator disagreement, we assume that within a particular corpus, one word form will always be consistently annotated.
Three categories of annotation difficulty are missing values, language-specific attributes, and multiword expressions.
A Deterministic Conversion
In our work, the goal is not simply to translate one schema into the other, but to translate one resource (the imperfect manifestation of the schema) to match the other. The differences between the schemata and discrepancies in annotation mean that the transformation of annotations from one schema to the other is not straightforward.
Two naive options for the conversion are a lookup table of MSDs and a lookup table of the individual attribute-value pairs which comprise the MSDs. The former is untenable: the table of all UD feature combinations (including null features, excluding language-specific attributes) would have 2.445e17 entries. Of course, most combinations won't exist, but this gives a sense of the table's scale. Also, it doesn't leverage the factorial nature of the annotations: constructing the table would require a massive duplication of effort. On the other hand, attribute-value lookup lacks the flexibility to show how a pair of values interacts. Neither approach would handle language- and annotator-specific tendencies in the corpora.
Our approach to converting UD MSDs to UniMorph MSDs begins with the attribute-value lookup, then amends it on a language-specific basis. Alterations informed by the MSD and the word form, like insertion, substitution, and deletion, increase the number of agreeing annotations. They are critical for work that examines the MSD monolithically instead of feature-by-feature BIBREF25 , BIBREF26 : Without exact matches, converting the individual tags becomes hollow.
Beginning our process, we relied on documentation of the two schemata to create our initial, language-agnostic mapping of individual values. This mapping has 140 pairs in it. Because the mapping was derived purely from the schemata, it is a useful approximation of how well the schemata match up. We note, however, that the mapping does not handle idiosyncrasies like the many uses of dative or features which are represented in UniMorph by argument templates: possession and ergative–absolutive argument marking. The initial step of our conversion is using this mapping to populate a proposed UniMorph MSD.
As shown in sec:results, the initial proposal is often frustratingly deficient. Thus we introduce the post-edits. To concoct these, we looked into UniMorph corpora for these languages, compared these to the conversion outputs, and then sought to bring the conversion outputs closer to the annotations in the actual UniMorph corpora. When a form and its lemma existed in both corpora, we could directly inspect how the annotations differed. Our process of iteratively refining the conversion implies a table which exactly maps any combination of UD MSD and its related values (lemma, form, etc.) to a UniMorph MSD, though we do not store the table explicitly.
Some conversion rules we've created must be applied before or after others. These sequential dependencies keep the individual rules concise. Our post-editing procedure applies these ordered, language-specific rules to the initial MSD hypothesis.
Experiments
We evaluate our tool on two tasks: an intrinsic evaluation that measures the recall of the converted MSDs against the UniMorph tables, and an extrinsic evaluation that compares morphological taggers trained on the converted and the original annotations.
To be clear, our scope is limited to the schema conversion. Future work will explore NLP tasks that exploit both the created token-level UniMorph data and the existing type-level UniMorph data.
Intrinsic evaluation
We transform all UD data to the UniMorph schema. We compare the simple lookup-based transformation to the one with linguistically informed post-edits on all languages with both UD and UniMorph data. We then evaluate the recall of MSDs without partial credit.
Because the UniMorph tables only possess annotations for verbs, nouns, adjectives, or some combination, we can only examine performance for these parts of speech. We consider two words to be a match if their form and lemma are present in both resources. Syncretism allows a single surface form to realize multiple MSDs (Spanish mandaba can be first- or third-person), so we define success as the computed MSD matching any of the word's UniMorph MSDs. This gives rise to an equation for recall: of the word–lemma pairs found in both resources, how many of their UniMorph-converted MSDs are present in the UniMorph tables?
Our problem here is not a learning problem, so the usual question of training and test splits is ill-posed. There is no training set, and the two resources for a given language make up a test set. The quality of our model, the conversion tool, comes from how well we encode prior knowledge about the relationship between the UD and UniMorph corpora.
Extrinsic evaluation
If the UniMorph-converted treebanks perform differently on downstream tasks, then they convey different information. This signals a failure of the conversion process. As a downstream task, we choose morphological tagging, a critical step to leveraging morphological information on new text.
We evaluate taggers trained on the transformed UD data, choosing eight languages randomly from the intersection of UD and UniMorph resources. We report the macro-averaged F1 score of attribute-value pairs on a held-out test set, with official train/validation/test splits provided in the UD treebanks. As a reference point, we also report tagging accuracy on those languages' untransformed data.
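For concreteness, macro-averaged F1 over attribute-value pairs can be computed roughly as below; this is a sketch of the metric, not the evaluation script actually used.

```python
from collections import defaultdict

def macro_f1(gold_msds, pred_msds):
    """Each MSD is a set of 'Attr=Val' strings; F1 is averaged over the observed pairs."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for gold, pred in zip(gold_msds, pred_msds):
        for pair in pred & gold:
            tp[pair] += 1
        for pair in pred - gold:
            fp[pair] += 1
        for pair in gold - pred:
            fn[pair] += 1
    def f1(p):
        prec = tp[p] / (tp[p] + fp[p]) if tp[p] + fp[p] else 0.0
        rec = tp[p] / (tp[p] + fn[p]) if tp[p] + fn[p] else 0.0
        return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    pairs = set(tp) | set(fp) | set(fn)
    return sum(f1(p) for p in pairs) / len(pairs) if pairs else 0.0
```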
We use the state-of-the-art morphological tagger of BIBREF0 . It is a factored conditional random field with potentials for each attribute, attribute pair, and attribute transition. The potentials are computed by neural networks, predicting the values of each attribute jointly but not monolithically. Inference with the potentials is performed approximately by loopy belief propagation. We use the authors' hyperparameters.
We note a minor implementation detail for the sake of reproducibility. The tagger exploits explicit guidance about the attribute each value pertains to. The UniMorph schema's values are globally unique, but their attributes are not explicit. For example, the UniMorph Masc denotes a masculine gender. We amend the code of BIBREF0 to incorporate attribute identifiers for each UniMorph value.
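The amendment can be as small as a table from each (globally unique) UniMorph value to its attribute; the fragment below is hypothetical and covers only a handful of values.

```python
# Hypothetical fragment: because UniMorph values are globally unique, a flat
# table suffices to recover the attribute that each value belongs to.
VALUE_TO_ATTRIBUTE = {
    "MASC": "Gender", "FEM": "Gender",
    "SG": "Number", "PL": "Number",
    "IND": "Mood", "PST": "Tense", "IPFV": "Aspect",
}

def with_attribute_ids(msd):
    """Turn 'V;IND;PST;IPFV;SG' into ['Mood=IND', 'Tense=PST', 'Aspect=IPFV', 'Number=SG']."""
    return [f"{VALUE_TO_ATTRIBUTE[v]}={v}" for v in msd.split(";") if v in VALUE_TO_ATTRIBUTE]
```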
Results
We present the intrinsic task's recall scores in tab:recall. Bear in mind that due to annotation errors in the original corpora (like the vas example from sec:resources), the optimal score is not always $100\%$ . Some shortcomings of recall come from irremediable annotation discrepancies. Largely, we are hamstrung by differences in choice of attributes to annotate. When one resource marks gender and the other marks case, we can't infer the gender of the word purely from its surface form. The resources themselves would need updating to encode the relevant morphosyntactic information. Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them: Arabic, Hindi, Lithuanian, Persian, and Russian. A full list of observed, irremediable discrepancies is presented alongside the codebase.
There are three other transformations for which we note no improvement here. Because of the problem in Basque argument encoding in the UniMorph dataset—which only contains verbs—we note no improvement in recall on Basque. Irish also does not improve: UD marks gender on nouns, while UniMorph marks case. Adjectives in UD are also underspecified. The verbs, though, are already correct with the simple mapping. Finally, with Dutch, the UD annotations are impoverished compared to the UniMorph annotations, and missing attributes cannot be inferred without external knowledge.
For the extrinsic task, the performance is reasonably similar whether the tagger is trained on the UniMorph-converted or the original UD annotations; see tab:tagging. A large fluctuation would suggest that the two annotations encode distinct information; on the contrary, the similarities suggest that the UniMorph-mapped MSDs have similar content. We note that in every case, tagging F1 increased—albeit by amounts as small as $0.16$ points. This is in part due to the information that is lost in the conversion: UniMorph's schema does not indicate the type of pronoun (demonstrative, interrogative, etc.), and when lexical information is not recorded in UniMorph, we delete it from the MSD during transformation. On the other hand, UniMorph's atomic tags have more parts to guess, but they are often related. (E.g. Ipfv always entails Pst in Spanish.) Altogether, these forces seem to have little impact on tagging performance.
Related Work
The goal of a tagset-to-tagset mapping of morphological annotations is shared by the Interset project BIBREF28 . Interset decodes features in the source corpus to a tag interlingua, then encodes that into target corpus features. (The idea of an interlingua is drawn from machine translation, where a prevailing early mindset was to convert to a universal representation, then encode that representation's semantics in the target language. Our approach, by contrast, is a direct flight from the source to the target.) Because UniMorph corpora are noisy, the encoding from the interlingua would have to be rewritten for each target. Further, decoding the UD MSD into the interlingua cannot leverage external information like the lemma and form.
The creators of HamleDT sought to harmonize dependency annotations among treebanks, similar to our goal of harmonizing across resources BIBREF29 . The treebanks they sought to harmonize used multiple diverse annotation schemes, which the authors unified under a single scheme.
BIBREF30 present mappings into a coarse, universal part of speech for 22 languages. Working with POS tags rather than morphological tags (which have far more dimensions), their space of options to harmonize is much smaller than ours.
Our extrinsic evaluation is most in line with the paradigm of BIBREF31 (and similar work therein), who compare syntactic parser performance on UD treebanks annotated with two styles of dependency representation. Our problem differs, though, in that the dependency representations express different relationships, while our two schemata vastly overlap. As our conversion is lossy, we do not appraise the learnability of representations as they did.
In addition to using the number of extra rules as a proxy for harmony between resources, one could perform cross-lingual projection of morphological tags BIBREF32 , BIBREF33 . Our approach succeeds even without parallel corpora.
Conclusion and Future Work
We created a tool for annotating Universal Dependencies CoNLL-U files with UniMorph annotations. Our tool is ready to use off-the-shelf today, requires no training, and is deterministic. While under-specification necessitates a lossy and imperfect conversion, ours is interpretable. Patterns of mistakes can be identified and ameliorated.
The tool allows a bridge between resources annotated in the Universal Dependencies and Universal Morphology (UniMorph) schemata. As the Universal Dependencies project provides a set of treebanks with token-level annotation, while the UniMorph project releases type-level annotated tables, the newfound compatibility opens up new experiments. A prime example of exploiting token- and type-level data is BIBREF34 . That work presents a part-of-speech (POS) dictionary built from Wiktionary, where the POS tagger is also constrained to options available in their type-level POS dictionary, improving performance. Our transformation means that datasets are prepared for similar experiments with morphological tagging. It would also be reasonable to incorporate this tool as a subroutine to UDPipe BIBREF35 and Udapi BIBREF36 . We leave open the task of converting in the opposite direction, turning UniMorph MSDs into Universal Dependencies MSDs.
Because our conversion rules are interpretable, we identify shortcomings in both resources, using each as validation for the other. We were able to find specific instances of incorrectly applied UniMorph annotation, as well as specific instances of cross-lingual inconsistency in both resources. These findings will harden both resources and better align them with their goal of universal, cross-lingual annotation.
Acknowledgments
We thank Hajime Senuma and John Sylak-Glassman for early comments in devising the starting language-independent mapping from Universal Dependencies to UniMorph. | Yes |
564dcaf8d0bcc274ab64c784e4c0f50d7a2c17ee | 564dcaf8d0bcc274ab64c784e4c0f50d7a2c17ee_0 | Q: Which languages do they validate on?
Text: Introduction
The two largest standardized, cross-lingual datasets for morphological annotation are provided by the Universal Dependencies BIBREF1 and Universal Morphology BIBREF2 , BIBREF3 projects. Each project's data are annotated according to its own cross-lingual schema, prescribing how features like gender or case should be marked. The schemata capture largely similar information, so one may want to leverage both UD's token-level treebanks and UniMorph's type-level lookup tables and unify the two resources. This would permit a leveraging of both the token-level UD treebanks and the type-level UniMorph tables of paradigms. Unfortunately, neither resource perfectly realizes its schema. On a dataset-by-dataset basis, they incorporate annotator errors, omissions, and human decisions when the schemata are underspecified; one such example is in fig:disagreement.
A dataset-by-dataset problem demands a dataset-by-dataset solution; our task is not to translate a schema, but to translate a resource. Starting from the idealized schema, we create a rule-based tool for converting UD-schema annotations to UniMorph annotations, incorporating language-specific post-edits that both correct infelicities and also increase harmony between the datasets themselves (rather than the schemata). We apply this conversion to the 31 languages with both UD and UniMorph data, and we report our method's recall, showing an improvement over the strategy which just maps corresponding schematic features to each other. Further, we show similar downstream performance for each annotation scheme in the task of morphological tagging.
This tool enables a synergistic use of UniMorph and Universal Dependencies, as well as teasing out the annotation discrepancies within and across projects. When one dataset disobeys its schema or disagrees with a related language, the flaws may not be noticed except by such a methodological dive into the resources. When the maintainers of the resources ameliorate these flaws, the resources move closer to the goal of a universal, cross-lingual inventory of features for morphological annotation.
The contributions of this work are: a rule-based tool that converts UD morphological annotations into the UniMorph schema with language-specific post-edits, an intrinsic and extrinsic evaluation of that conversion across the 31 languages with data in both projects, and the identification of annotation discrepancies within and across the two resources.
Background: Morphological Inflection
Morphological inflection is the act of altering the base form of a word (the lemma, represented in fixed-width type) to encode morphosyntactic features. As an example from English, prove takes on the form proved to indicate that the action occurred in the past. (We will represent all surface forms in quotation marks.) The process occurs in the majority of the world's widely-spoken languages, typically through meaningful affixes. The breadth of forms created by inflection creates a challenge of data sparsity for natural language processing: The likelihood of observing a particular word form diminishes.
A classic result in psycholinguistics BIBREF4 shows that inflectional morphology is a fully productive process. Indeed, it cannot be that humans simply have the equivalent of a lookup table, where they store the inflected forms for retrieval as the syntactic context requires. Instead, there needs to be a mental process that can generate properly inflected words on demand. BIBREF4 showed this insightfully through the wug-test, an experiment where she forced participants to correctly inflect out-of-vocabulary lemmata, such as the novel noun wug.
Certain features of a word do not vary depending on its context: In German or Spanish where nouns are gendered, the word for onion will always be grammatically feminine. Thus, to prepare for later discussion, we divide the morphological features of a word into two categories: the modifiable inflectional features and the fixed lexical features.
A part of speech (POS) is a coarse syntactic category (like verb) that begets a word's particular menu of lexical and inflectional features. In English, verbs express no gender, and adjectives do not reflect person or number. The part of speech dictates a set of inflectional slots to be filled by the surface forms. Completing these slots for a given lemma and part of speech gives a paradigm: a mapping from slots to surface forms. Regular English verbs have five slots in their paradigm BIBREF5 , which we illustrate for the verb prove, using simple labels for the forms in tab:ptb.
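Concretely, such a paradigm can be represented as a small mapping from slots to surface forms; the slot labels below are informal stand-ins for the labels in tab:ptb.

```python
# The five-slot paradigm of the regular English verb "prove" (informal slot labels).
PROVE = {
    "base": "prove",
    "3rd person singular present": "proves",
    "present participle": "proving",
    "past": "proved",
    "past participle": "proved",
}
```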
A morphosyntactic schema prescribes how language can be annotated—giving stricter categories than our simple labels for prove—and can vary in the level of detail provided. Part of speech tags are an example of a very coarse schema, ignoring details of person, gender, and number. A slightly finer-grained schema for English is the Penn Treebank tagset BIBREF6 , which includes signals for English morphology. For instance, its VBZ tag pertains to the specially inflected 3rd-person singular, present-tense verb form (e.g. proves in tab:ptb).
If the tag in a schema is detailed enough that it exactly specifies a slot in a paradigm, it is called a morphosyntactic description (MSD). These descriptions require varying amounts of detail: While the English verbal paradigm is small enough to fit on a page, the verbal paradigm of the Northeast Caucasian language Archi can have over 1500000 slots BIBREF7 .
Two Schemata, Two Philosophies
Unlike the Penn Treebank tags, the UD and UniMorph schemata are cross-lingual and include a fuller lexicon of attribute-value pairs, such as Person: 1. Each was built according to a different set of principles. UD's schema is constructed bottom-up, adapting to include new features when they're identified in languages. UniMorph, conversely, is top-down: A cross-lingual survey of the literature of morphological phenomena guided its design. UniMorph aims to be linguistically complete, containing all known morphosyntactic attributes. Both schemata share one long-term goal: a total inventory for annotating the possible morphosyntactic features of a word.
Universal Dependencies
The Universal Dependencies morphological schema comprises part of speech and 23 additional attributes (also called features in UD) annotating meaning or syntax, as well as language-specific attributes. In order to ensure consistent annotation, attributes are included into the general UD schema if they occur in several corpora. Language-specific attributes are used when only one corpus annotates for a specific feature.
The UD schema seeks to balance language-specific and cross-lingual concerns. It annotates for both inflectional features such as case and lexical features such as gender. Additionally, the UD schema annotates for features which can be interpreted as derivational in some languages. For example, the Czech UD guidance uses a Coll value for the Number feature to denote mass nouns (for example, "lidstvo" "humankind" from the root "lid" "people").
UD represents a confederation of datasets BIBREF8 annotated with dependency relationships (which are not the focus of this work) and morphosyntactic descriptions. Each dataset is an annotated treebank, making it a resource of token-level annotations. The schema is guided by these treebanks, with feature names chosen for relevance to native speakers. (In sec:unimorph, we will contrast this with UniMorph's treatment of morphosyntactic categories.) The UD datasets have been used in the CoNLL shared tasks BIBREF9 .
UniMorph
In the Universal Morphological Feature Schema BIBREF10 , there are at least 212 values, spread across 23 attributes. It identifies some attributes that UD excludes like information structure and deixis, as well as providing more values for certain attributes, like 23 different noun classes endemic to Bantu languages. As it is a schema for marking morphology, its part of speech attribute does not have POS values for punctuation, symbols, or miscellany (Punct, Sym, and X in Universal Dependencies).
Like the UD schema, the decomposition of a word into its lemma and MSD is directly comparable across languages. Its features are informed by a distinction between universal categories, which are widespread and psychologically real to speakers; and comparative concepts, only used by linguistic typologists to compare languages BIBREF11 . Additionally, it strives for identity of meaning across languages, not simply similarity of terminology. As a prime example, it does not regularly label a dative case for nouns, for reasons explained in depth by BIBREF11 .
The UniMorph resources for a language contain complete paradigms extracted from Wiktionary BIBREF12 , BIBREF13 . Word types are annotated to form a database, mapping a lemma–tag pair to a surface form. The schema is explained in detail in BIBREF10 . It has been used in the SIGMORPHON shared task BIBREF14 and the CoNLL–SIGMORPHON shared tasks BIBREF15 , BIBREF16 . Several components of the UniMorph schema have been adopted by UD.
Similarities in the annotation
While the two schemata annotate different features, their annotations often look largely similar. Consider the attested annotation of the Spanish word mandaba (I/he/she/it) commanded. tab:annotations shows that these annotations share many attributes.
Some conversions are straightforward: VERB to V, Mood=Ind to IND, Number=Sing to SG, and Person=3 to 3. One might also suggest mapping Tense=Imp to IPFV, though this crosses semantic categories: IPFV represents the imperfective aspect, whereas Tense=Imp comes from imperfect, the English name often given to Spanish's pasado continuo form. The imperfect is a verb form which combines both past tense and imperfective aspect. UniMorph chooses to split this into the atoms PST and IPFV, while UD unifies them according to the familiar name of the tense.
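Applied to mandaba, those correspondences yield the following toy conversion; the attribute-value strings and the output ordering are illustrative, with Tense=Imp as the only pair that does not map one-to-one.

```python
# Toy conversion of the UD annotation of Spanish "mandaba"; PST;IPFV reflects
# UniMorph's decision to split the imperfect into tense and aspect atoms.
PAIR_MAP = {
    "VERB": "V",
    "Mood=Ind": "IND",
    "Number=Sing": "SG",
    "Person=3": "3",
    "Tense=Imp": "PST;IPFV",
}

ud_annotation = ["VERB", "Mood=Ind", "Number=Sing", "Person=3", "Tense=Imp"]
unimorph_msd = ";".join(PAIR_MAP[x] for x in ud_annotation)  # "V;IND;SG;3;PST;IPFV"
```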
UD treebanks and UniMorph tables
Prima facie, the alignment task may seem trivial. But we've yet to explore the humans in the loop. This conversion is a hard problem because we're operating on idealized schemata. We're actually annotating human decisions—and human mistakes. If both schemata were perfectly applied, their overlapping attributes could be mapped to each other simply, in a cross-lingual and totally general way. Unfortunately, the resources are imperfect realizations of their schemata. The cross-lingual, cross-resource, and within-resource problems that we'll note mean that we need a tailor-made solution for each language.
Showcasing their schemata, the Universal Dependencies and UniMorph projects each present large, annotated datasets. UD's v2.1 release BIBREF1 has 102 treebanks in 60 languages. The large resource, constructed by independent parties, evinces problems in the goal of a universal inventory of annotations. Annotators may choose to omit certain values (like the coerced gender of refrescante in fig:disagreement), and they may disagree on how a linguistic concept is encoded. (See, e.g., BIBREF11 's ( BIBREF11 ) description of the dative case.) Additionally, many of the treebanks were created by fully- or semi-automatic conversion from treebanks with less comprehensive annotation schemata than UD BIBREF0 . For instance, the Spanish word vas you go is incorrectly labeled Gender: Fem|Number: Pl because it ends in a character sequence which is common among feminine plural nouns. (Nevertheless, the part of speech field for vas is correct.)
UniMorph's development is more centralized and pipelined. Inflectional paradigms are scraped from Wiktionary, annotators map positions in the scraped data to MSDs, and the mapping is automatically applied to all of the scraped paradigms. Because annotators handle languages they are familiar with (or related ones), realization of the schema is also done on a language-by-language basis. Further, the scraping process does not capture lexical aspects that are not inflected, like noun gender in many languages. The schema permits inclusion of these details; their absence is an artifact of the data collection process. Finally, UniMorph records only exist for nouns, verbs, and adjectives, though the schema is broader than these categories.
For these reasons, we treat the corpora as imperfect realizations of the schemata. Moreover, we contend that ambiguity in the schemata leave the door open to allow for such imperfections. With no strict guidance, it's natural that annotators would take different paths. Nevertheless, modulo annotator disagreement, we assume that within a particular corpus, one word form will always be consistently annotated.
Three categories of annotation difficulty are missing values, language-specific attributes, and multiword expressions.
A Deterministic Conversion
In our work, the goal is not simply to translate one schema into the other, but to translate one resource (the imperfect manifestation of the schema) to match the other. The differences between the schemata and discrepancies in annotation mean that the transformation of annotations from one schema to the other is not straightforward.
Two naive options for the conversion are a lookup table of MSDs and a lookup table of the individual attribute-value pairs which comprise the MSDs. The former is untenable: the table of all UD feature combinations (including null features, excluding language-specific attributes) would have 2.445e17 entries. Of course, most combinations won't exist, but this gives a sense of the table's scale. Also, it doesn't leverage the factorial nature of the annotations: constructing the table would require a massive duplication of effort. On the other hand, attribute-value lookup lacks the flexibility to show how a pair of values interacts. Neither approach would handle language- and annotator-specific tendencies in the corpora.
Our approach to converting UD MSDs to UniMorph MSDs begins with the attribute-value lookup, then amends it on a language-specific basis. Alterations informed by the MSD and the word form, like insertion, substitution, and deletion, increase the number of agreeing annotations. They are critical for work that examines the MSD monolithically instead of feature-by-feature BIBREF25 , BIBREF26 : Without exact matches, converting the individual tags becomes hollow.
Beginning our process, we relied on documentation of the two schemata to create our initial, language-agnostic mapping of individual values. This mapping has 140 pairs in it. Because the mapping was derived purely from the schemata, it is a useful approximation of how well the schemata match up. We note, however, that the mapping does not handle idiosyncrasies like the many uses of dative or features which are represented in UniMorph by argument templates: possession and ergative–absolutive argument marking. The initial step of our conversion is using this mapping to populate a proposed UniMorph MSD.
As shown in sec:results, the initial proposal is often frustratingly deficient. Thus we introduce the post-edits. To concoct these, we looked into UniMorph corpora for these languages, compared these to the conversion outputs, and then sought to bring the conversion outputs closer to the annotations in the actual UniMorph corpora. When a form and its lemma existed in both corpora, we could directly inspect how the annotations differed. Our process of iteratively refining the conversion implies a table which exactly maps any combination of UD MSD and its related values (lemma, form, etc.) to a UniMorph MSD, though we do not store the table explicitly.
Some conversion rules we've created must be applied before or after others; these ordering dependencies keep the individual rules concise. Our post-editing procedure applies the language-specific rules to the initial MSD hypothesis in this fixed order.
Experiments
We evaluate our tool on two tasks: an intrinsic evaluation of the recall of the converted MSDs against the UniMorph tables, and an extrinsic evaluation of downstream morphological tagging on the converted data.
To be clear, our scope is limited to the schema conversion. Future work will explore NLP tasks that exploit both the created token-level UniMorph data and the existing type-level UniMorph data.
Intrinsic evaluation
We transform all UD data to the UniMorph schema. We compare the simple lookup-based transformation to the one with linguistically informed post-edits on all languages with both UD and UniMorph data. We then evaluate the recall of MSDs without partial credit.
Because the UniMorph tables only possess annotations for verbs, nouns, adjectives, or some combination, we can only examine performance for these parts of speech. We consider two words to be a match if their form and lemma are present in both resources. Syncretism allows a single surface form to realize multiple MSDs (Spanish mandaba can be first- or third-person), so we define success as the computed MSD matching any of the word's UniMorph MSDs. This gives rise to an equation for recall: of the word–lemma pairs found in both resources, how many of their UniMorph-converted MSDs are present in the UniMorph tables?
One might ask how the data are split for training and testing; our problem here is not a learning problem, so the question is ill-posed. There is no training set, and the two resources for a given language make up a test set. The quality of our model—the conversion tool—comes from how well we encode prior knowledge about the relationship between the UD and UniMorph corpora.
Extrinsic evaluation
If the UniMorph-converted treebanks perform differently on downstream tasks, then they convey different information. This signals a failure of the conversion process. As a downstream task, we choose morphological tagging, a critical step to leveraging morphological information on new text.
We evaluate taggers trained on the transformed UD data, choosing eight languages randomly from the intersection of UD and UniMorph resources. We report the macro-averaged F1 score of attribute-value pairs on a held-out test set, with official train/validation/test splits provided in the UD treebanks. As a reference point, we also report tagging accuracy on those languages' untransformed data.
We use the state-of-the-art morphological tagger of BIBREF0 . It is a factored conditional random field with potentials for each attribute, attribute pair, and attribute transition. The potentials are computed by neural networks, predicting the values of each attribute jointly but not monolithically. Inference with the potentials is performed approximately by loopy belief propagation. We use the authors' hyperparameters.
We note a minor implementation detail for the sake of reproducibility. The tagger exploits explicit guidance about the attribute each value pertains to. The UniMorph schema's values are globally unique, but their attributes are not explicit. For example, the UniMorph Masc denotes a masculine gender. We amend the code of BIBREF0 to incorporate attribute identifiers for each UniMorph value.
Results
We present the intrinsic task's recall scores in tab:recall. Bear in mind that due to annotation errors in the original corpora (like the vas example from sec:resources), the optimal score is not always $100\%$ . Some shortcomings of recall come from irremediable annotation discrepancies. Largely, we are hamstrung by differences in choice of attributes to annotate. When one resource marks gender and the other marks case, we can't infer the gender of the word purely from its surface form. The resources themselves would need updating to encode the relevant morphosyntactic information. Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them: Arabic, Hindi, Lithuanian, Persian, and Russian. A full list of observed, irremediable discrepancies is presented alongside the codebase.
There are three other transformations for which we note no improvement here. Because of the problem in Basque argument encoding in the UniMorph dataset—which only contains verbs—we note no improvement in recall on Basque. Irish also does not improve: UD marks gender on nouns, while UniMorph marks case. Adjectives in UD are also underspecified. The verbs, though, are already correct with the simple mapping. Finally, with Dutch, the UD annotations are impoverished compared to the UniMorph annotations, and missing attributes cannot be inferred without external knowledge.
For the extrinsic task, the performance is reasonably similar whether the tagger is trained on the UniMorph-converted or the original UD annotations; see tab:tagging. A large fluctuation would suggest that the two annotations encode distinct information; on the contrary, the similarities suggest that the UniMorph-mapped MSDs have similar content. We note that in every case, tagging F1 increased—albeit by amounts as small as $0.16$ points. This is in part due to the information that is lost in the conversion: UniMorph's schema does not indicate the type of pronoun (demonstrative, interrogative, etc.), and when lexical information is not recorded in UniMorph, we delete it from the MSD during transformation. On the other hand, UniMorph's atomic tags have more parts to guess, but they are often related. (E.g. Ipfv always entails Pst in Spanish.) Altogether, these forces seem to have little impact on tagging performance.
Related Work
The goal of a tagset-to-tagset mapping of morphological annotations is shared by the Interset project BIBREF28 . Interset decodes features in the source corpus to a tag interlingua, then encodes that into target corpus features. (The idea of an interlingua is drawn from machine translation, where a prevailing early mindset was to convert to a universal representation, then encode that representation's semantics in the target language. Our approach, by contrast, is a direct flight from the source to the target.) Because UniMorph corpora are noisy, the encoding from the interlingua would have to be rewritten for each target. Further, decoding the UD MSD into the interlingua cannot leverage external information like the lemma and form.
The creators of HamleDT sought to harmonize dependency annotations among treebanks, similar to our goal of harmonizing across resources BIBREF29 . The treebanks they sought to harmonize used multiple diverse annotation schemes, which the authors unified under a single scheme.
BIBREF30 present mappings into a coarse, universal part of speech for 22 languages. Working with POS tags rather than morphological tags (which have far more dimensions), their space of options to harmonize is much smaller than ours.
Our extrinsic evaluation is most in line with the paradigm of BIBREF31 (and similar work therein), who compare syntactic parser performance on UD treebanks annotated with two styles of dependency representation. Our problem differs, though, in that the dependency representations express different relationships, while our two schemata vastly overlap. As our conversion is lossy, we do not appraise the learnability of representations as they did.
In addition to using the number of extra rules as a proxy for harmony between resources, one could perform cross-lingual projection of morphological tags BIBREF32 , BIBREF33 . Our approach succeeds even without parallel corpora.
Conclusion and Future Work
We created a tool for annotating Universal Dependencies CoNLL-U files with UniMorph annotations. Our tool is ready to use off-the-shelf today, requires no training, and is deterministic. While under-specification necessitates a lossy and imperfect conversion, ours is interpretable. Patterns of mistakes can be identified and ameliorated.
The tool allows a bridge between resources annotated in the Universal Dependencies and Universal Morphology (UniMorph) schemata. As the Universal Dependencies project provides a set of treebanks with token-level annotation, while the UniMorph project releases type-level annotated tables, the newfound compatibility opens up new experiments. A prime example of exploiting token- and type-level data is BIBREF34 . That work presents a part-of-speech (POS) dictionary built from Wiktionary, where the POS tagger is also constrained to options available in their type-level POS dictionary, improving performance. Our transformation means that datasets are prepared for similar experiments with morphological tagging. It would also be reasonable to incorporate this tool as a subroutine to UDPipe BIBREF35 and Udapi BIBREF36 . We leave open the task of converting in the opposite direction, turning UniMorph MSDs into Universal Dependencies MSDs.
Because our conversion rules are interpretable, we identify shortcomings in both resources, using each as validation for the other. We were able to find specific instances of incorrectly applied UniMorph annotation, as well as specific instances of cross-lingual inconsistency in both resources. These findings will harden both resources and better align them with their goal of universal, cross-lingual annotation.
Acknowledgments
We thank Hajime Senuma and John Sylak-Glassman for early comments in devising the starting language-independent mapping from Universal Dependencies to UniMorph. | Ar, Bg, Ca, Cs, Da, De, En, Es, Eu, Fa, Fi, Fr, Ga, He, Hi, Hu, It, La, Lt, Lv, Nb, Nl, Nn, PL, Pt, Ro, Ru, Sl, Sv, Tr, Uk, Ur |
564dcaf8d0bcc274ab64c784e4c0f50d7a2c17ee | 564dcaf8d0bcc274ab64c784e4c0f50d7a2c17ee_1 | Q: Which languages do they validate on?
Text: Introduction
The two largest standardized, cross-lingual datasets for morphological annotation are provided by the Universal Dependencies BIBREF1 and Universal Morphology BIBREF2 , BIBREF3 projects. Each project's data are annotated according to its own cross-lingual schema, prescribing how features like gender or case should be marked. The schemata capture largely similar information, so one may want to leverage both UD's token-level treebanks and UniMorph's type-level lookup tables and unify the two resources. This would permit a leveraging of both the token-level UD treebanks and the type-level UniMorph tables of paradigms. Unfortunately, neither resource perfectly realizes its schema. On a dataset-by-dataset basis, they incorporate annotator errors, omissions, and human decisions when the schemata are underspecified; one such example is in fig:disagreement.
A dataset-by-dataset problem demands a dataset-by-dataset solution; our task is not to translate a schema, but to translate a resource. Starting from the idealized schema, we create a rule-based tool for converting UD-schema annotations to UniMorph annotations, incorporating language-specific post-edits that both correct infelicities and also increase harmony between the datasets themselves (rather than the schemata). We apply this conversion to the 31 languages with both UD and UniMorph data, and we report our method's recall, showing an improvement over the strategy which just maps corresponding schematic features to each other. Further, we show similar downstream performance for each annotation scheme in the task of morphological tagging.
This tool enables a synergistic use of UniMorph and Universal Dependencies, as well as teasing out the annotation discrepancies within and across projects. When one dataset disobeys its schema or disagrees with a related language, the flaws may not be noticed except by such a methodological dive into the resources. When the maintainers of the resources ameliorate these flaws, the resources move closer to the goal of a universal, cross-lingual inventory of features for morphological annotation.
The contributions of this work are: a rule-based tool that converts UD morphological annotations into the UniMorph schema with language-specific post-edits, an intrinsic and extrinsic evaluation of that conversion across the 31 languages with data in both projects, and the identification of annotation discrepancies within and across the two resources.
Background: Morphological Inflection
Morphological inflection is the act of altering the base form of a word (the lemma, represented in fixed-width type) to encode morphosyntactic features. As an example from English, prove takes on the form proved to indicate that the action occurred in the past. (We will represent all surface forms in quotation marks.) The process occurs in the majority of the world's widely-spoken languages, typically through meaningful affixes. The breadth of forms created by inflection creates a challenge of data sparsity for natural language processing: The likelihood of observing a particular word form diminishes.
A classic result in psycholinguistics BIBREF4 shows that inflectional morphology is a fully productive process. Indeed, it cannot be that humans simply have the equivalent of a lookup table, where they store the inflected forms for retrieval as the syntactic context requires. Instead, there needs to be a mental process that can generate properly inflected words on demand. BIBREF4 showed this insightfully through the wug-test, an experiment where she forced participants to correctly inflect out-of-vocabulary lemmata, such as the novel noun wug.
Certain features of a word do not vary depending on its context: In German or Spanish where nouns are gendered, the word for onion will always be grammatically feminine. Thus, to prepare for later discussion, we divide the morphological features of a word into two categories: the modifiable inflectional features and the fixed lexical features.
A part of speech (POS) is a coarse syntactic category (like verb) that begets a word's particular menu of lexical and inflectional features. In English, verbs express no gender, and adjectives do not reflect person or number. The part of speech dictates a set of inflectional slots to be filled by the surface forms. Completing these slots for a given lemma and part of speech gives a paradigm: a mapping from slots to surface forms. Regular English verbs have five slots in their paradigm BIBREF5 , which we illustrate for the verb prove, using simple labels for the forms in tab:ptb.
A morphosyntactic schema prescribes how language can be annotated—giving stricter categories than our simple labels for prove—and can vary in the level of detail provided. Part of speech tags are an example of a very coarse schema, ignoring details of person, gender, and number. A slightly finer-grained schema for English is the Penn Treebank tagset BIBREF6 , which includes signals for English morphology. For instance, its VBZ tag pertains to the specially inflected 3rd-person singular, present-tense verb form (e.g. proves in tab:ptb).
If the tag in a schema is detailed enough that it exactly specifies a slot in a paradigm, it is called a morphosyntactic description (MSD). These descriptions require varying amounts of detail: While the English verbal paradigm is small enough to fit on a page, the verbal paradigm of the Northeast Caucasian language Archi can have over 1500000 slots BIBREF7 .
Two Schemata, Two Philosophies
Unlike the Penn Treebank tags, the UD and UniMorph schemata are cross-lingual and include a fuller lexicon of attribute-value pairs, such as Person: 1. Each was built according to a different set of principles. UD's schema is constructed bottom-up, adapting to include new features when they're identified in languages. UniMorph, conversely, is top-down: A cross-lingual survey of the literature of morphological phenomena guided its design. UniMorph aims to be linguistically complete, containing all known morphosyntactic attributes. Both schemata share one long-term goal: a total inventory for annotating the possible morphosyntactic features of a word.
Universal Dependencies
The Universal Dependencies morphological schema comprises part of speech and 23 additional attributes (also called features in UD) annotating meaning or syntax, as well as language-specific attributes. In order to ensure consistent annotation, attributes are included into the general UD schema if they occur in several corpora. Language-specific attributes are used when only one corpus annotates for a specific feature.
The UD schema seeks to balance language-specific and cross-lingual concerns. It annotates for both inflectional features such as case and lexical features such as gender. Additionally, the UD schema annotates for features which can be interpreted as derivational in some languages. For example, the Czech UD guidance uses a Coll value for the Number feature to denote mass nouns (for example, "lidstvo" "humankind" from the root "lid" "people").
UD represents a confederation of datasets BIBREF8 annotated with dependency relationships (which are not the focus of this work) and morphosyntactic descriptions. Each dataset is an annotated treebank, making it a resource of token-level annotations. The schema is guided by these treebanks, with feature names chosen for relevance to native speakers. (In sec:unimorph, we will contrast this with UniMorph's treatment of morphosyntactic categories.) The UD datasets have been used in the CoNLL shared tasks BIBREF9 .
UniMorph
In the Universal Morphological Feature Schema BIBREF10 , there are at least 212 values, spread across 23 attributes. It identifies some attributes that UD excludes like information structure and deixis, as well as providing more values for certain attributes, like 23 different noun classes endemic to Bantu languages. As it is a schema for marking morphology, its part of speech attribute does not have POS values for punctuation, symbols, or miscellany (Punct, Sym, and X in Universal Dependencies).
Like the UD schema, the decomposition of a word into its lemma and MSD is directly comparable across languages. Its features are informed by a distinction between universal categories, which are widespread and psychologically real to speakers; and comparative concepts, only used by linguistic typologists to compare languages BIBREF11 . Additionally, it strives for identity of meaning across languages, not simply similarity of terminology. As a prime example, it does not regularly label a dative case for nouns, for reasons explained in depth by BIBREF11 .
The UniMorph resources for a language contain complete paradigms extracted from Wiktionary BIBREF12 , BIBREF13 . Word types are annotated to form a database, mapping a lemma–tag pair to a surface form. The schema is explained in detail in BIBREF10 . It has been used in the SIGMORPHON shared task BIBREF14 and the CoNLL–SIGMORPHON shared tasks BIBREF15 , BIBREF16 . Several components of the UniMorph schema have been adopted by UD.
Similarities in the annotation
While the two schemata annotate different features, their annotations often look largely similar. Consider the attested annotation of the Spanish word mandaba (I/he/she/it) commanded. tab:annotations shows that these annotations share many attributes.
Some conversions are straightforward: VERB to V, Mood=Ind to IND, Number=Sing to SG, and Person=3 to 3. One might also suggest mapping Tense=Imp to IPFV, though this crosses semantic categories: IPFV represents the imperfective aspect, whereas Tense=Imp comes from imperfect, the English name often given to Spanish's pasado continuo form. The imperfect is a verb form which combines both past tense and imperfective aspect. UniMorph chooses to split this into the atoms PST and IPFV, while UD unifies them according to the familiar name of the tense.
UD treebanks and UniMorph tables
Prima facie, the alignment task may seem trivial. But we've yet to explore the humans in the loop. This conversion is a hard problem because we're operating on idealized schemata. We're actually annotating human decisions—and human mistakes. If both schemata were perfectly applied, their overlapping attributes could be mapped to each other simply, in a cross-lingual and totally general way. Unfortunately, the resources are imperfect realizations of their schemata. The cross-lingual, cross-resource, and within-resource problems that we'll note mean that we need a tailor-made solution for each language.
Showcasing their schemata, the Universal Dependencies and UniMorph projects each present large, annotated datasets. UD's v2.1 release BIBREF1 has 102 treebanks in 60 languages. The large resource, constructed by independent parties, evinces problems in the goal of a universal inventory of annotations. Annotators may choose to omit certain values (like the coerced gender of refrescante in fig:disagreement), and they may disagree on how a linguistic concept is encoded. (See, e.g., BIBREF11 's ( BIBREF11 ) description of the dative case.) Additionally, many of the treebanks were created by fully- or semi-automatic conversion from treebanks with less comprehensive annotation schemata than UD BIBREF0 . For instance, the Spanish word vas you go is incorrectly labeled Gender: Fem|Number: Pl because it ends in a character sequence which is common among feminine plural nouns. (Nevertheless, the part of speech field for vas is correct.)
UniMorph's development is more centralized and pipelined. Inflectional paradigms are scraped from Wiktionary, annotators map positions in the scraped data to MSDs, and the mapping is automatically applied to all of the scraped paradigms. Because annotators handle languages they are familiar with (or related ones), realization of the schema is also done on a language-by-language basis. Further, the scraping process does not capture lexical aspects that are not inflected, like noun gender in many languages. The schema permits inclusion of these details; their absence is an artifact of the data collection process. Finally, UniMorph records only exist for nouns, verbs, and adjectives, though the schema is broader than these categories.
For these reasons, we treat the corpora as imperfect realizations of the schemata. Moreover, we contend that ambiguity in the schemata leave the door open to allow for such imperfections. With no strict guidance, it's natural that annotators would take different paths. Nevertheless, modulo annotator disagreement, we assume that within a particular corpus, one word form will always be consistently annotated.
Three categories of annotation difficulty are missing values, language-specific attributes, and multiword expressions.
A Deterministic Conversion
In our work, the goal is not simply to translate one schema into the other, but to translate one resource (the imperfect manifestation of the schema) to match the other. The differences between the schemata and discrepancies in annotation mean that the transformation of annotations from one schema to the other is not straightforward.
Two naive options for the conversion are a lookup table of MSDs and a lookup table of the individual attribute-value pairs which comprise the MSDs. The former is untenable: the table of all UD feature combinations (including null features, excluding language-specific attributes) would have 2.445e17 entries. Of course, most combinations won't exist, but this gives a sense of the table's scale. Also, it doesn't leverage the factorial nature of the annotations: constructing the table would require a massive duplication of effort. On the other hand, attribute-value lookup lacks the flexibility to show how a pair of values interacts. Neither approach would handle language- and annotator-specific tendencies in the corpora.
Our approach to converting UD MSDs to UniMorph MSDs begins with the attribute-value lookup, then amends it on a language-specific basis. Alterations informed by the MSD and the word form, like insertion, substitution, and deletion, increase the number of agreeing annotations. They are critical for work that examines the MSD monolithically instead of feature-by-feature BIBREF25 , BIBREF26 : Without exact matches, converting the individual tags becomes hollow.
Beginning our process, we relied on documentation of the two schemata to create our initial, language-agnostic mapping of individual values. This mapping has 140 pairs in it. Because the mapping was derived purely from the schemata, it is a useful approximation of how well the schemata match up. We note, however, that the mapping does not handle idiosyncrasies like the many uses of dative or features which are represented in UniMorph by argument templates: possession and ergative–absolutive argument marking. The initial step of our conversion is using this mapping to populate a proposed UniMorph MSD.
As shown in sec:results, the initial proposal is often frustratingly deficient. Thus we introduce the post-edits. To concoct these, we looked into UniMorph corpora for these languages, compared these to the conversion outputs, and then sought to bring the conversion outputs closer to the annotations in the actual UniMorph corpora. When a form and its lemma existed in both corpora, we could directly inspect how the annotations differed. Our process of iteratively refining the conversion implies a table which exactly maps any combination of UD MSD and its related values (lemma, form, etc.) to a UniMorph MSD, though we do not store the table explicitly.
Some conversion rules we've created must be applied before or after others; these ordering dependencies keep the individual rules concise. Our post-editing procedure applies the language-specific rules to the initial MSD hypothesis in this fixed order.
Experiments
We evaluate our tool on two tasks: an intrinsic evaluation of the recall of the converted MSDs against the UniMorph tables, and an extrinsic evaluation of downstream morphological tagging on the converted data.
To be clear, our scope is limited to the schema conversion. Future work will explore NLP tasks that exploit both the created token-level UniMorph data and the existing type-level UniMorph data.
Intrinsic evaluation
We transform all UD data to the UniMorph schema. We compare the simple lookup-based transformation to the one with linguistically informed post-edits on all languages with both UD and UniMorph data. We then evaluate the recall of MSDs without partial credit.
Because the UniMorph tables only possess annotations for verbs, nouns, adjectives, or some combination, we can only examine performance for these parts of speech. We consider two words to be a match if their form and lemma are present in both resources. Syncretism allows a single surface form to realize multiple MSDs (Spanish mandaba can be first- or third-person), so we define success as the computed MSD matching any of the word's UniMorph MSDs. This gives rise to an equation for recall: of the word–lemma pairs found in both resources, how many of their UniMorph-converted MSDs are present in the UniMorph tables?
One might ask how the data are split for training and testing; our problem here is not a learning problem, so the question is ill-posed. There is no training set, and the two resources for a given language make up a test set. The quality of our model—the conversion tool—comes from how well we encode prior knowledge about the relationship between the UD and UniMorph corpora.
Extrinsic evaluation
If the UniMorph-converted treebanks perform differently on downstream tasks, then they convey different information. This signals a failure of the conversion process. As a downstream task, we choose morphological tagging, a critical step to leveraging morphological information on new text.
We evaluate taggers trained on the transformed UD data, choosing eight languages randomly from the intersection of UD and UniMorph resources. We report the macro-averaged F1 score of attribute-value pairs on a held-out test set, with official train/validation/test splits provided in the UD treebanks. As a reference point, we also report tagging accuracy on those languages' untransformed data.
We use the state-of-the-art morphological tagger of BIBREF0 . It is a factored conditional random field with potentials for each attribute, attribute pair, and attribute transition. The potentials are computed by neural networks, predicting the values of each attribute jointly but not monolithically. Inference with the potentials is performed approximately by loopy belief propagation. We use the authors' hyperparameters.
We note a minor implementation detail for the sake of reproducibility. The tagger exploits explicit guidance about the attribute each value pertains to. The UniMorph schema's values are globally unique, but their attributes are not explicit. For example, the UniMorph Masc denotes a masculine gender. We amend the code of BIBREF0 to incorporate attribute identifiers for each UniMorph value.
Results
We present the intrinsic task's recall scores in tab:recall. Bear in mind that due to annotation errors in the original corpora (like the vas example from sec:resources), the optimal score is not always $100\%$ . Some shortcomings of recall come from irremediable annotation discrepancies. Largely, we are hamstrung by differences in choice of attributes to annotate. When one resource marks gender and the other marks case, we can't infer the gender of the word purely from its surface form. The resources themselves would need updating to encode the relevant morphosyntactic information. Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them: Arabic, Hindi, Lithuanian, Persian, and Russian. A full list of observed, irremediable discrepancies is presented alongside the codebase.
There are three other transformations for which we note no improvement here. Because of the problem in Basque argument encoding in the UniMorph dataset—which only contains verbs—we note no improvement in recall on Basque. Irish also does not improve: UD marks gender on nouns, while UniMorph marks case. Adjectives in UD are also underspecified. The verbs, though, are already correct with the simple mapping. Finally, with Dutch, the UD annotations are impoverished compared to the UniMorph annotations, and missing attributes cannot be inferred without external knowledge.
For the extrinsic task, the performance is reasonably similar whether the tagger is trained on the UniMorph-converted or the original UD annotations; see tab:tagging. A large fluctuation would suggest that the two annotations encode distinct information; on the contrary, the similarities suggest that the UniMorph-mapped MSDs have similar content. We note that in every case, tagging F1 increased—albeit by amounts as small as $0.16$ points. This is in part due to the information that is lost in the conversion: UniMorph's schema does not indicate the type of pronoun (demonstrative, interrogative, etc.), and when lexical information is not recorded in UniMorph, we delete it from the MSD during transformation. On the other hand, UniMorph's atomic tags have more parts to guess, but they are often related. (E.g. Ipfv always entails Pst in Spanish.) Altogether, these forces seem to have little impact on tagging performance.
Related Work
The goal of a tagset-to-tagset mapping of morphological annotations is shared by the Interset project BIBREF28 . Interset decodes features in the source corpus to a tag interlingua, then encodes that into target corpus features. (The idea of an interlingua is drawn from machine translation, where a prevailing early mindset was to convert to a universal representation, then encode that representation's semantics in the target language. Our approach, by contrast, is a direct flight from the source to the target.) Because UniMorph corpora are noisy, the encoding from the interlingua would have to be rewritten for each target. Further, decoding the UD MSD into the interlingua cannot leverage external information like the lemma and form.
The creators of HamleDT sought to harmonize dependency annotations among treebanks, similar to our goal of harmonizing across resources BIBREF29 . The treebanks they sought to harmonize used multiple diverse annotation schemes, which the authors unified under a single scheme.
BIBREF30 present mappings into a coarse, universal part of speech for 22 languages. Working with POS tags rather than morphological tags (which have far more dimensions), their space of options to harmonize is much smaller than ours.
Our extrinsic evaluation is most in line with the paradigm of BIBREF31 (and similar work therein), who compare syntactic parser performance on UD treebanks annotated with two styles of dependency representation. Our problem differs, though, in that the dependency representations express different relationships, while our two schemata vastly overlap. As our conversion is lossy, we do not appraise the learnability of representations as they did.
In addition to using the number of extra rules as a proxy for harmony between resources, one could perform cross-lingual projection of morphological tags BIBREF32 , BIBREF33 . Our approach succeeds even without parallel corpora.
Conclusion and Future Work
We created a tool for annotating Universal Dependencies CoNLL-U files with UniMorph annotations. Our tool is ready to use off-the-shelf today, requires no training, and is deterministic. While under-specification necessitates a lossy and imperfect conversion, ours is interpretable. Patterns of mistakes can be identified and ameliorated.
The tool allows a bridge between resources annotated in the Universal Dependencies and Universal Morphology (UniMorph) schemata. As the Universal Dependencies project provides a set of treebanks with token-level annotation, while the UniMorph project releases type-level annotated tables, the newfound compatibility opens up new experiments. A prime example of exploiting token- and type-level data is BIBREF34 . That work presents a part-of-speech (POS) dictionary built from Wiktionary, where the POS tagger is also constrained to options available in their type-level POS dictionary, improving performance. Our transformation means that datasets are prepared for similar experiments with morphological tagging. It would also be reasonable to incorporate this tool as a subroutine to UDPipe BIBREF35 and Udapi BIBREF36 . We leave open the task of converting in the opposite direction, turning UniMorph MSDs into Universal Dependencies MSDs.
Because our conversion rules are interpretable, we identify shortcomings in both resources, using each as validation for the other. We were able to find specific instances of incorrectly applied UniMorph annotation, as well as specific instances of cross-lingual inconsistency in both resources. These findings will harden both resources and better align them with their goal of universal, cross-lingual annotation.
Acknowledgments
We thank Hajime Senuma and John Sylak-Glassman for early comments in devising the starting language-independent mapping from Universal Dependencies to UniMorph. | We apply this conversion to the 31 languages, Arabic, Hindi, Lithuanian, Persian, and Russian. , Dutch, Spanish |
f3d0e6452b8d24b7f9db1fd898d1fbe6cd23f166 | f3d0e6452b8d24b7f9db1fd898d1fbe6cd23f166_0 | Q: Does the paper evaluate any adjustment to improve the prediction accuracy of face and audio features?
Text: Introduction
Automatic emotion recognition is commonly understood as the task of assigning an emotion to a predefined instance, for example an utterance (as audio signal), an image (for instance with a depicted face), or a textual unit (e.g., a transcribed utterance, a sentence, or a Tweet). The set of emotions is often following the original definition by Ekman Ekman1992, which includes anger, fear, disgust, sadness, joy, and surprise, or the extension by Plutchik Plutchik1980 who adds trust and anticipation.
Most work in emotion detection is limited to one modality. Exceptions include Busso2004 and Sebe2005, who investigate multimodal approaches combining speech with facial information. Emotion recognition in speech can utilize semantic features as well BIBREF0. Note that the term “multimodal” is also used beyond the combination of vision, audio, and text. For example, Soleymani2012 use it to refer to the combination of electroencephalogram, pupillary response and gaze distance.
In this paper, we deal with the specific situation of car environments as a testbed for multimodal emotion recognition. This is an interesting environment since it is, to some degree, a controlled environment: Dialogue partners are limited in movement, the degrees of freedom for occurring events are limited, and several sensors which are useful for emotion recognition are already integrated in this setting. More specifically, we focus on emotion recognition from speech events in a dialogue with a human partner and with an intelligent agent.
Also from the application point of view, the domain is a relevant choice: Past research has shown that emotional intelligence is beneficial for human computer interaction. Properly processing emotions in interactions increases the engagement of users and can improve performance when a specific task is to be fulfilled BIBREF1, BIBREF2, BIBREF3, BIBREF4. This is mostly based on the aspect that machines communicating with humans appear to be more trustworthy when they show empathy and are perceived as being natural BIBREF3, BIBREF5, BIBREF4.
Virtual agents play an increasingly important role in the automotive context and the speech modality is increasingly being used in cars due to its potential to limit distraction. It has been shown that adapting the in-car speech interaction system according to the drivers' emotional state can help to enhance security, performance as well as the overall driving experience BIBREF6, BIBREF7.
With this paper, we investigate how each of the three considered modalities, namely facial expressions, utterances of a driver as an audio signal, and transcribed text, contributes to the task of emotion recognition in in-car speech interactions. We focus on the five emotions of joy, insecurity, annoyance, relaxation, and boredom, since terms corresponding to so-called fundamental emotions like fear have been shown to be associated with emotional states too strong to be appropriate for the in-car context BIBREF8. Our first contribution is the description of the experimental setup for our data collection. Aiming to provoke specific emotions with situations which can occur in real-world driving scenarios and to induce speech interactions, the study was conducted in a driving simulator. Based on the collected data, we provide baseline predictions with off-the-shelf tools for face and speech emotion recognition and compare them to a neural network-based approach for emotion recognition from text. Our second contribution is the introduction of transfer learning to adapt models trained on established out-of-domain corpora to our use case. We work on German language data; therefore, the transfer consists of a domain and a language transfer.
Related Work ::: Facial Expressions
A common approach to encode emotions for facial expressions is the facial action coding system FACS BIBREF9, BIBREF10, BIBREF11. As the reliability and reproducibility of findings with this method have been critically discussed BIBREF12, the trend has increasingly shifted towards performing the recognition directly on images and videos, especially with deep learning. For instance, jung2015joint developed a model which considers temporal geometry features and temporal appearance features from image sequences. kim2016hierarchical propose an ensemble of convolutional neural networks which outperforms isolated networks.
In the automotive domain, FACS is still popular. Ma2017 use support vector machines to distinguish happy, bothered, confused, and concentrated based on data from a natural driving environment. They found that bothered and confused are difficult to distinguish, while happy and concentrated are well identified. Aiming to reduce computational cost, Tews2011 apply a simple feature extraction using four dots in the face defining three facial areas. They analyze the variance of the three facial areas for the recognition of happy, anger and neutral. Ihme2018 aim at detecting frustration in a simulator environment. They induce the emotion with specific scenarios and a demanding secondary task and are able to associate specific face movements according to FACS. Paschero2012 use OpenCV (https://opencv.org/) to detect the eyes and the mouth region and track facial movements. They simulate different lighting conditions and apply a multilayer perceptron for the classification task of Ekman's set of fundamental emotions.
Overall, we found that studies using facial features usually focus on continuous driver monitoring, often in driver-only scenarios. In contrast, our work investigates the potential of emotion recognition during speech interactions.
Related Work ::: Acoustic
Past research on emotion recognition from acoustics mainly concentrates on either feature selection or the development of appropriate classifiers. rao2013emotion as well as ververidis2004automatic compare local and global features in support vector machines. Next to such discriminative approaches, hidden Markov models are well-studied; however, there is no agreement on which feature-based classifier is most suitable BIBREF13. Similar to the facial expression modality, efforts to apply deep learning to acoustic speech processing have recently increased. For instance, lee2015high use a recurrent neural network and palaz2015analysis apply a convolutional neural network to the raw speech signal. Neumann2017 as well as Trigeorgis2016 analyze the importance of features in the context of deep learning-based emotion recognition.
In the automotive sector, Boril2011 approach the detection of negative emotional states within interactions between driver and co-driver as well as in calls of the driver towards the automated spoken dialogue system. Using real-world driving data, they find that the combination of acoustic features and their respective Gaussian mixture model scores performs best. Schuller2006 collect 2,000 dialog turns directed towards an automotive user interface and investigate the classification of anger, confusion, and neutral. They show that automatic feature generation and feature selection boost the performance of an SVM-based classifier. Further, they analyze the performance under systematically added noise and develop methods to mitigate negative effects. For more details, we refer the reader to the survey by Schuller2018. In this work, we explore the straightforward application of domain-independent software to an in-car scenario without domain-specific adaptations.
Related Work ::: Text
Previous work on emotion analysis in natural language processing focuses either on resource creation or on emotion classification for a specific task and domain. On the side of resource creation, the early and influential work of Pennebaker2015 is a dictionary of words being associated with different psychologically relevant categories, including a subset of emotions. Another popular resource is the NRC dictionary by Mohammad2012b. It contains more than 10000 words for a set of discrete emotion classes. Other resources include WordNet Affect BIBREF14 which distinguishes particular word classes. Further, annotated corpora have been created for a set of different domains, for instance fairy tales BIBREF15, Blogs BIBREF16, Twitter BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, Facebook BIBREF22, news headlines BIBREF23, dialogues BIBREF24, literature BIBREF25, or self reports on emotion events BIBREF26 (see BIBREF27 for an overview).
To automatically assign emotions to textual units, the application of dictionaries has been a popular approach and still is, particularly in domains without annotated corpora. Another approach to overcome the lack of huge amounts of annotated training data in a particular domain or for a specific topic is to exploit distant supervision: use the signal of occurrences of emoticons or specific hashtags or words to automatically label the data. This is sometimes referred to as self-labeling BIBREF21, BIBREF28, BIBREF29, BIBREF30.
A variety of classification approaches have been tested, including SNoW BIBREF15, support vector machines BIBREF16, maximum entropy classification, long short-term memory network, and convolutional neural network models BIBREF18. More recently, the state of the art is the use of transfer learning from noisy annotations to more specific predictions BIBREF29. Still, it has been shown that transferring from one domain to another is challenging, as the way emotions are expressed varies between areas BIBREF27. The approach by Felbo2017 is different to our work as they use a huge noisy data set for pretraining the model while we use small high quality data sets instead.
Recently, the state of the art has also been pushed forward with a set of shared tasks, in which the participants with top results mostly exploit deep learning methods for prediction based on pretrained structures like embeddings or language models BIBREF21, BIBREF31, BIBREF20.
Our work follows this approach and builds up on embeddings with deep learning. Furthermore, we approach the application and adaption of text-based classifiers to the automotive domain with transfer learning.
Data set Collection
The first contribution of this paper is the construction of the AMMER data set which we describe in the following. We focus on the drivers' interactions with both a virtual agent as well as a co-driver. To collect the data in a safe and controlled environment and to be able to consider a variety of predefined driving situations, the study was conducted in a driving simulator.
Data set Collection ::: Study Setup and Design
The study environment consists of a fixed-base driving simulator running Vires's VTD (Virtual Test Drive, v2.2.0) simulation software (https://vires.com/vtd-vires-virtual-test-drive/). The vehicle has an automatic transmission, a steering wheel, and gas and brake pedals. We collect data from video, speech and biosignals (Empatica E4 to record heart rate, electrodermal activity, skin temperature; not further used in this paper) and questionnaires. Two RGB cameras are fixed in the vehicle to capture the driver's face, one at the sun shield above the driver's seat and one in the middle of the dashboard. A microphone is placed on the center console. One experimenter sits next to the driver, the other behind the simulator. The virtual agent accompanying the drive is realized as a Wizard-of-Oz prototype which enables the experimenter to manually trigger prerecorded voice samples playing through the in-car speakers and to bring new content to the center screen. Figure FIGREF4 shows the driving simulator.
The experimental setting is comparable to an everyday driving task. Participants are told that the goal of the study is to evaluate and to improve an intelligent driving assistant. To increase the probability of emotions to arise, participants are instructed to reach the destination of the route as fast as possible while following traffic rules and speed limits. They are informed that the time needed for the task would be compared to other participants. The route comprises highways, rural roads, and city streets. A navigation system with voice commands and information on the screen keeps the participants on the predefined track.
To trigger emotion changes in the participant, we use the following events: (i) a car on the right lane cutting off to the left lane when participants try to overtake followed by trucks blocking both lanes with a slow overtaking maneuver (ii) a skateboarder who appears unexpectedly on the street and (iii) participants are praised for reaching the destination unexpectedly quickly in comparison to previous participants.
Based on these events, we trigger three interactions (Table TABREF6 provides examples) with the intelligent agent (Driver-Agent Interactions, D–A). Pretending to be aware of the current situation, e. g., to recognize unusual driving behavior such as strong braking, the agent asks the driver to explain his subjective perception of these events in detail. Additionally, we trigger two more interactions with the intelligent agent at the beginning and at the end of the drive, where participants are asked to describe their mood and thoughts regarding the (upcoming) drive. This results in five interactions between the driver and the virtual agent.
Furthermore, the co-driver asks three different questions during sessions with light traffic and low cognitive demand (Driver-Co-Driver Interactions, D–Co). These questions are more general and non-traffic-related and aim at triggering the participants' memory and fantasy. Participants are asked to describe their last vacation, their dream house and their idea of the perfect job. In sum, there are eight interactions per participant (5 D–A, 3 D–Co).
Data set Collection ::: Procedure
At the beginning of the study, participants were welcomed and the upcoming study procedure was explained. Subsequently, participants signed a consent form and completed a questionnaire to provide demographic information. After that, the co-driving experimenter started with the instruction in the simulator, which was followed by a familiarization drive consisting of highway and city driving and covering different driving maneuvers such as tight corners, lane changing and strong braking. Subsequently, participants started with the main driving task. The drive had a duration of 20 minutes containing the eight previously mentioned speech interactions. After the completion of the drive, the actual goal of improving automatic emotion recognition was revealed and a standard emotional intelligence questionnaire, namely the TEIQue-SF BIBREF32, was handed to the participants. Finally, a retrospective interview was conducted, in which participants were played recordings of their in-car interactions and asked to give discrete (annoyance, insecurity, joy, relaxation, boredom, none, following BIBREF8) as well as dimensional (valence, arousal, dominance BIBREF33 on an 11-point scale) emotion ratings for the interactions and the corresponding situations. We only use the discrete class annotations in this paper.
Data set Collection ::: Data Analysis
Overall, 36 participants aged 18 to 64 years ($\mu $=28.89, $\sigma $=12.58) completed the experiment. This leads to 288 interactions, 180 between driver and the agent and 108 between driver and co-driver. The emotion self-ratings from the participants yielded 90 utterances labeled with joy, 26 with annoyance, 49 with insecurity, 9 with boredom, 111 with relaxation and 3 with no emotion. One example interaction per interaction type and emotion is shown in Table TABREF7. For further experiments, we only use joy, annoyance/anger, and insecurity/fear due to the small sample size for boredom and no emotion and under the assumption that relaxation brings little expressivity.
Methods ::: Emotion Recognition from Facial Expressions
We preprocess the visual data by extracting the sequence of images for each interaction from the point where the agent's or the co-driver's question was completely uttered until the driver's response stops. The average length is 16.3 seconds, with the minimum at 2.2s and the maximum at 54.7s. We apply an off-the-shelf tool for emotion recognition (the manufacturer cannot be disclosed due to licensing restrictions). It delivers frame-by-frame scores ($\in [0;100]$) for discrete emotional states of joy, anger and fear. While joy corresponds directly to our annotation, we map anger to our label annoyance and fear to our label insecurity. The maximal average score across all frames constitutes the overall classification for the video sequence. Frames where the software is not able to detect the face are ignored.
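To make the aggregation step concrete, the sketch below averages each emotion's score over all frames with a detected face and returns the emotion with the maximal average; the dictionary-based score format and the variable names are our own illustration and not the output format of the commercial tool.

```python
import numpy as np

EMOTIONS = ["joy", "annoyance", "insecurity"]  # tool outputs joy/anger/fear, mapped to our labels

def classify_video(frame_scores):
    """frame_scores: list with one dict of per-emotion scores in [0, 100] per frame,
    or None for frames in which no face was detected (these are ignored)."""
    valid = [f for f in frame_scores if f is not None]
    if not valid:
        return None  # no face detected in the entire sequence
    means = {e: np.mean([f[e] for f in valid]) for e in EMOTIONS}
    return max(means, key=means.get)  # emotion with the maximal average score

# toy example with three frames, one of them without a detected face
frames = [{"joy": 62.0, "annoyance": 10.0, "insecurity": 5.0},
          None,
          {"joy": 48.0, "annoyance": 20.0, "insecurity": 9.0}]
print(classify_video(frames))  # -> "joy"
```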
Methods ::: Emotion Recognition from Audio Signal
We extract the audio signal for the same sequence as described for facial expressions and apply an off-the-shelf tool for emotion recognition. The software delivers single classification scores for a set of 24 discrete emotions for the entire utterance. We consider the outputs for the states of joy, anger, and fear, mapping analogously to our classes as for facial expressions. Low-confidence predictions are interpreted as “no emotion”. We accept the emotion with the highest score as the discrete prediction otherwise.
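The decision rule for the acoustic tool can be summarized as follows; the mapping dictionary reflects the text above, but the numeric threshold and the score format are placeholders we chose for illustration, since the tool's internal confidence handling is not documented here.

```python
# Map the tool's labels to our annotation scheme.
LABEL_MAP = {"joy": "joy", "anger": "annoyance", "fear": "insecurity"}
CONFIDENCE_THRESHOLD = 0.5  # assumed cut-off; the actual tool's threshold differs

def classify_utterance(scores):
    """scores: dict mapping the tool's 24 emotion names to confidences for one utterance."""
    relevant = {LABEL_MAP[k]: v for k, v in scores.items() if k in LABEL_MAP}
    best_label, best_score = max(relevant.items(), key=lambda kv: kv[1])
    if best_score < CONFIDENCE_THRESHOLD:
        return "no emotion"  # low-confidence predictions are discarded
    return best_label

print(classify_utterance({"anger": 0.71, "fear": 0.40, "joy": 0.05, "boredom": 0.12}))
# -> "annoyance"
```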
Methods ::: Emotion Recognition from Transcribed Utterances
For the emotion recognition from text, we manually transcribe all utterances of our AMMER study. To exploit existing and available data sets which are larger than the AMMER data set, we develop a transfer learning approach. We use a neural network with an embedding layer (frozen weights, pre-trained on Common Crawl and Wikipedia BIBREF36), a bidirectional LSTM BIBREF37, and two dense layers followed by a softmax output layer. This setup is inspired by BIBREF38. We use a dropout rate of 0.3 in all layers and optimize with Adam BIBREF39 with a learning rate of $10^{-5}$ (these parameters are the same for all further experiments). We build on top of the Keras library with the TensorFlow backend. We consider this setup our baseline model.
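A minimal Keras sketch of this baseline is given below. The vocabulary size, sequence length, and hidden layer sizes are assumptions (the text above does not fix them), and the zero matrix stands in for the pre-trained, frozen embedding weights.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, EMB_DIM, MAX_LEN, NUM_CLASSES = 20000, 300, 50, 3   # assumed values
embedding_matrix = np.zeros((VOCAB_SIZE, EMB_DIM))  # placeholder for the pre-trained vectors

model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, EMB_DIM,
                     embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
                     trainable=False),                      # frozen embedding layer
    layers.Bidirectional(layers.LSTM(64, dropout=0.3, recurrent_dropout=0.3)),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(32, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),        # anger/annoyance, fear/insecurity, joy
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```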
We train models on a variety of corpora, namely the common format published by BIBREF27 of the FigureEight (formerly known as Crowdflower) data set of social media, the ISEAR data BIBREF40 (self-reported emotional events), and the Twitter Emotion Corpus (TEC, weakly annotated Tweets with #anger, #disgust, #fear, #happy, #sadness, and #surprise, Mohammad2012). From all corpora, we use instances with the labels fear, anger, or joy. These corpora are in English; however, we make predictions on German utterances. Therefore, each corpus is translated to German with Google Translate. We remove URLs, user tags (“@Username”), punctuation and hash signs. The distributions of the data sets are shown in Table TABREF12.
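The clean-up described above can be approximated with a few regular expressions; the exact patterns are our reconstruction of the listed steps (URLs, user tags, punctuation, hash signs) and are applied after machine translation to German.

```python
import re
import string

def clean_text(text: str) -> str:
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)   # URLs
    text = re.sub(r"@\w+", " ", text)                     # user tags like @Username
    text = text.replace("#", " ")                         # hash signs (keep the word itself)
    text = text.translate(str.maketrans("", "", string.punctuation))  # punctuation
    return re.sub(r"\s+", " ", text).strip()

print(clean_text("Tolles Spiel heute!!! #Freude @Username https://example.org"))
# -> "Tolles Spiel heute Freude"
```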
To adapt models trained on these data, we apply transfer learning as follows: The model is first trained until convergence on one out-of-domain corpus (only on the classes fear, joy, and anger for compatibility reasons). Then, the parameters of the bi-LSTM layer are frozen and the remaining layers are further trained on AMMER. This procedure is illustrated in Figure FIGREF13.
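In code, the adaptation step amounts to freezing the recurrent layer of a converged out-of-domain model and continuing training on AMMER. The sketch below assumes a Keras model built as in the previous listing; the number of epochs and the batch size are placeholders.

```python
import tensorflow as tf

def adapt_to_ammer(model, x_ammer, y_ammer, epochs=10, batch_size=16):
    """Freeze the bi-LSTM of a model pre-trained on an out-of-domain corpus
    and fine-tune the remaining (dense) layers on the AMMER training data."""
    for layer in model.layers:
        if isinstance(layer, tf.keras.layers.Bidirectional):
            layer.trainable = False  # keep the recurrent weights fixed
    # Recompile so that the changed trainable flags take effect.
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(x_ammer, y_ammer, epochs=epochs, batch_size=batch_size)
    return model
```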
Results ::: Facial Expressions and Audio
Table TABREF16 shows the confusion matrices for facial and audio emotion recognition on our complete AMMER data set and Table TABREF17 shows the results per class for each method, including facial and audio data and micro and macro averages. The classification from facial expressions yields a macro-averaged $\text{F}_1$ score of 33 % across the three emotions joy, insecurity, and annoyance (P=0.31, R=0.35). While the classification results for joy are promising (R=43 %, P=57 %), the distinction of insecurity and annoyance from the other classes appears to be more challenging.
Regarding the audio signal, we observe a macro $\text{F}_1$ score of 29 % (P=42 %, R=22 %). There is a bias towards negative emotions, which results in a small number of detected joy predictions (R=4 %). Insecurity and annoyance are frequently confused.
Results ::: Text from Transcribed Utterances
The experimental setting for the evaluation of emotion recognition from text is as follows: We evaluate the BiLSTM model in three different experiments: (1) in-domain, (2) out-of-domain and (3) transfer learning. For all experiments we train on the classes anger/annoyance, fear/insecurity and joy. Table TABREF19 shows all results for the comparison of these experimental settings.
Results ::: Text from Transcribed Utterances ::: Experiment 1: In-Domain application
We first set a baseline by validating our models on established corpora. We train the baseline model on 60 % of each data set listed in Table TABREF12 and evaluate that model with 40 % of the data from the same domain (results shown in the column “In-Domain” in Table TABREF19). Excluding AMMER, we achieve an average micro $\text{F}_1$ of 68 %, with best results of F$_1$=73 % on TEC. The model trained on our AMMER corpus achieves an F1 score of 57%. This is most probably due to the small size of this data set and the class bias towards joy, which makes up more than half of the data set. These results are mostly in line with Bostan2018.
Results ::: Text from Transcribed Utterances ::: Experiment 2: Simple Out-Of-Domain application
Now we analyze how well the models trained in Experiment 1 perform when applied to our data set. The results are shown in column “Simple” in Table TABREF19. We observe a clear drop in performance, with an average of F$_1$=48 %. The best-performing model is again the one trained on TEC, on par with the one trained on the Figure8 data. While the model trained on ISEAR performs second best in Experiment 1, it performs worst in Experiment 2.
Results ::: Text from Transcribed Utterances ::: Experiment 3: Transfer Learning application
To adapt models trained on previously existing data sets to our particular application, the AMMER corpus, we apply transfer learning. Here, we perform leave-one-out cross validation. As pre-trained models we use each model from Experiment 1 and further optimize with the training subset of each crossvalidation iteration of AMMER. The results are shown in the column “Transfer L.” in Table TABREF19. The confusion matrix is also depicted in Table TABREF16.
With this procedure we achieve an average performance of F$_1$=75 %, being better than the results from the in-domain Experiment 1. The best performance of F$_1$=76 % is achieved with the model pre-trained on each data set, except for ISEAR. All transfer learning models clearly outperform their simple out-of-domain counterpart.
To ensure that this performance increase is not only due to the larger data set, we compare these results to training the model without transfer on a corpus consisting of each corpus together with AMMER (again, in leave-one-out crossvalidation). These results are depicted in column “Joint C.”. Thus, both settings, “transfer learning” and “joint corpus” have access to the same information.
The results show an increase in performance in contrast to not using AMMER for training; however, the transfer approach based on partially retraining the model shows a clear improvement for all models (by 7pp for Figure8, 10pp for EmoInt, 8pp for TEC, 13pp for ISEAR) compared to the “Joint” setup.
Summary & Future Work
We described the creation of the multimodal AMMER data with emotional speech interactions between a driver and both a virtual agent and a co-driver. We analyzed the modalities of facial expressions, acoustics, and transcribed utterances regarding their potential for emotion recognition during in-car speech interactions. We applied off-the-shelf emotion recognition tools for facial expressions and acoustics. For transcribed text, we developed a neural network-based classifier with transfer learning exploiting existing annotated corpora. We find that analyzing transcribed utterances is most promising for classification of the three emotional states of joy, annoyance and insecurity.
Our results for facial expressions indicate that there is potential for the classification of joy, however, the states of annoyance and insecurity are not well recognized. Future work needs to investigate more sophisticated approaches to map frame predictions to sequence predictions. Furthermore, movements of the mouth region during speech interactions might negatively influence the classification from facial expressions. Therefore, the question remains how facial expressions can best contribute to multimodal detection in speech interactions.
Regarding the classification from the acoustic signal, the application of off-the-shelf classifiers without further adjustments seems to be challenging. We find a strong bias towards negative emotional states for our experimental setting. For instance, the personalization of the recognition algorithm (e. g., mean and standard deviation normalization) could help to adapt the classification for specific speakers and thus to reduce this bias. Further, the acoustic environment in the vehicle interior has special properties and the recognition software might need further adaptations.
Our transfer learning-based text classifier shows considerably better results. This is a substantial result in its own right, as, to the best of our knowledge, only one previous method for transfer learning in emotion recognition has been proposed, in which a sentiment/emotion-specific source of labels is used in pre-training BIBREF29. Other applications of transfer learning from general language models include BIBREF41, BIBREF42. Our approach is substantially different, not being trained on a huge amount of noisy data, but on smaller out-of-domain sets of higher quality. This result suggests that emotion classification systems which work across domains can be developed with reasonable effort.
For a productive application of emotion detection in the context of speech events we conclude that a deployed system might perform best with a speech-to-text module followed by an analysis of the text. Further, in this work, we did not explore an ensemble model or the interaction of different modalities. Thus, future work should investigate the fusion of multiple modalities in a single classifier.
Acknowledgment
We thank Laura-Ana-Maria Bostan for discussions and data set preparations. This research has partially been funded by the German Research Council (DFG), project SEAT (KL 2869/1-1). | No |
9b1d789398f1f1a603e4741a5eee63ccaf0d4a4f | 9b1d789398f1f1a603e4741a5eee63ccaf0d4a4f_0 | Q: How is face and audio data analysis evaluated?
Text: Introduction
Automatic emotion recognition is commonly understood as the task of assigning an emotion to a predefined instance, for example an utterance (as audio signal), an image (for instance with a depicted face), or a textual unit (e.g., a transcribed utterance, a sentence, or a Tweet). The set of emotions is often following the original definition by Ekman Ekman1992, which includes anger, fear, disgust, sadness, joy, and surprise, or the extension by Plutchik Plutchik1980 who adds trust and anticipation.
Most work in emotion detection is limited to one modality. Exceptions include Busso2004 and Sebe2005, who investigate multimodal approaches combining speech with facial information. Emotion recognition in speech can utilize semantic features as well BIBREF0. Note that the term “multimodal” is also used beyond the combination of vision, audio, and text. For example, Soleymani2012 use it to refer to the combination of electroencephalogram, pupillary response and gaze distance.
In this paper, we deal with the specific situation of car environments as a testbed for multimodal emotion recognition. This is an interesting environment since it is, to some degree, a controlled environment: Dialogue partners are limited in movement, the degrees of freedom for occurring events are limited, and several sensors which are useful for emotion recognition are already integrated in this setting. More specifically, we focus on emotion recognition from speech events in a dialogue with a human partner and with an intelligent agent.
Also from the application point of view, the domain is a relevant choice: Past research has shown that emotional intelligence is beneficial for human computer interaction. Properly processing emotions in interactions increases the engagement of users and can improve performance when a specific task is to be fulfilled BIBREF1, BIBREF2, BIBREF3, BIBREF4. This is mostly based on the aspect that machines communicating with humans appear to be more trustworthy when they show empathy and are perceived as being natural BIBREF3, BIBREF5, BIBREF4.
Virtual agents play an increasingly important role in the automotive context and the speech modality is increasingly being used in cars due to its potential to limit distraction. It has been shown that adapting the in-car speech interaction system according to the drivers' emotional state can help to enhance security, performance as well as the overall driving experience BIBREF6, BIBREF7.
With this paper, we investigate how each of the three considered modalities, namely facial expressions, utterances of a driver as an audio signal, and transcribed text, contributes to the task of emotion recognition in in-car speech interactions. We focus on the five emotions of joy, insecurity, annoyance, relaxation, and boredom, since terms corresponding to so-called fundamental emotions like fear have been shown to be associated with emotional states too strong to be appropriate for the in-car context BIBREF8. Our first contribution is the description of the experimental setup for our data collection. Aiming to provoke specific emotions with situations which can occur in real-world driving scenarios and to induce speech interactions, the study was conducted in a driving simulator. Based on the collected data, we provide baseline predictions with off-the-shelf tools for face and speech emotion recognition and compare them to a neural network-based approach for emotion recognition from text. Our second contribution is the introduction of transfer learning to adapt models trained on established out-of-domain corpora to our use case. We work on German language data; therefore, the transfer consists of a domain and a language transfer.
Related Work ::: Facial Expressions
A common approach to encode emotions for facial expressions is the facial action coding system FACS BIBREF9, BIBREF10, BIBREF11. As the reliability and reproducibility of findings with this method have been critically discussed BIBREF12, the trend has increasingly shifted towards performing the recognition directly on images and videos, especially with deep learning. For instance, jung2015joint developed a model which considers temporal geometry features and temporal appearance features from image sequences. kim2016hierarchical propose an ensemble of convolutional neural networks which outperforms isolated networks.
In the automotive domain, FACS is still popular. Ma2017 use support vector machines to distinguish happy, bothered, confused, and concentrated based on data from a natural driving environment. They found that bothered and confused are difficult to distinguish, while happy and concentrated are well identified. Aiming to reduce computational cost, Tews2011 apply a simple feature extraction using four dots in the face defining three facial areas. They analyze the variance of the three facial areas for the recognition of happy, anger and neutral. Ihme2018 aim at detecting frustration in a simulator environment. They induce the emotion with specific scenarios and a demanding secondary task and are able to associate specific face movements according to FACS. Paschero2012 use OpenCV (https://opencv.org/) to detect the eyes and the mouth region and track facial movements. They simulate different lighting conditions and apply a multilayer perceptron for the classification task of Ekman's set of fundamental emotions.
Overall, we found that studies using facial features usually focus on continuous driver monitoring, often in driver-only scenarios. In contrast, our work investigates the potential of emotion recognition during speech interactions.
Related Work ::: Acoustic
Past research on emotion recognition from acoustics mainly concentrates on either feature selection or the development of appropriate classifiers. rao2013emotion as well as ververidis2004automatic compare local and global features in support vector machines. Next to such discriminative approaches, hidden Markov models are well-studied; however, there is no agreement on which feature-based classifier is most suitable BIBREF13. Similar to the facial expression modality, efforts to apply deep learning to acoustic speech processing have recently increased. For instance, lee2015high use a recurrent neural network and palaz2015analysis apply a convolutional neural network to the raw speech signal. Neumann2017 as well as Trigeorgis2016 analyze the importance of features in the context of deep learning-based emotion recognition.
In the automotive sector, Boril2011 approach the detection of negative emotional states within interactions between driver and co-driver as well as in calls of the driver towards the automated spoken dialogue system. Using real-world driving data, they find that the combination of acoustic features and their respective Gaussian mixture model scores performs best. Schuller2006 collect 2,000 dialog turns directed towards an automotive user interface and investigate the classification of anger, confusion, and neutral. They show that automatic feature generation and feature selection boost the performance of an SVM-based classifier. Further, they analyze the performance under systematically added noise and develop methods to mitigate negative effects. For more details, we refer the reader to the survey by Schuller2018. In this work, we explore the straightforward application of domain-independent software to an in-car scenario without domain-specific adaptations.
Related Work ::: Text
Previous work on emotion analysis in natural language processing focuses either on resource creation or on emotion classification for a specific task and domain. On the side of resource creation, the early and influential work of Pennebaker2015 is a dictionary of words being associated with different psychologically relevant categories, including a subset of emotions. Another popular resource is the NRC dictionary by Mohammad2012b. It contains more than 10000 words for a set of discrete emotion classes. Other resources include WordNet Affect BIBREF14 which distinguishes particular word classes. Further, annotated corpora have been created for a set of different domains, for instance fairy tales BIBREF15, Blogs BIBREF16, Twitter BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, Facebook BIBREF22, news headlines BIBREF23, dialogues BIBREF24, literature BIBREF25, or self reports on emotion events BIBREF26 (see BIBREF27 for an overview).
To automatically assign emotions to textual units, the application of dictionaries has been a popular approach and still is, particularly in domains without annotated corpora. Another approach to overcome the lack of huge amounts of annotated training data in a particular domain or for a specific topic is to exploit distant supervision: use the signal of occurrences of emoticons or specific hashtags or words to automatically label the data. This is sometimes referred to as self-labeling BIBREF21, BIBREF28, BIBREF29, BIBREF30.
A variety of classification approaches have been tested, including SNoW BIBREF15, support vector machines BIBREF16, maximum entropy classification, long short-term memory network, and convolutional neural network models BIBREF18. More recently, the state of the art is the use of transfer learning from noisy annotations to more specific predictions BIBREF29. Still, it has been shown that transferring from one domain to another is challenging, as the way emotions are expressed varies between areas BIBREF27. The approach by Felbo2017 is different to our work as they use a huge noisy data set for pretraining the model while we use small high quality data sets instead.
Recently, the state of the art has also been pushed forward with a set of shared tasks, in which the participants with top results mostly exploit deep learning methods for prediction based on pretrained structures like embeddings or language models BIBREF21, BIBREF31, BIBREF20.
Our work follows this approach and builds up on embeddings with deep learning. Furthermore, we approach the application and adaption of text-based classifiers to the automotive domain with transfer learning.
Data set Collection
The first contribution of this paper is the construction of the AMMER data set which we describe in the following. We focus on the drivers' interactions with both a virtual agent as well as a co-driver. To collect the data in a safe and controlled environment and to be able to consider a variety of predefined driving situations, the study was conducted in a driving simulator.
Data set Collection ::: Study Setup and Design
The study environment consists of a fixed-base driving simulator running Vires's VTD (Virtual Test Drive, v2.2.0) simulation software (https://vires.com/vtd-vires-virtual-test-drive/). The vehicle has an automatic transmission, a steering wheel, and gas and brake pedals. We collect data from video, speech and biosignals (Empatica E4 to record heart rate, electrodermal activity, skin temperature; not further used in this paper) and questionnaires. Two RGB cameras are fixed in the vehicle to capture the driver's face, one at the sun shield above the driver's seat and one in the middle of the dashboard. A microphone is placed on the center console. One experimenter sits next to the driver, the other behind the simulator. The virtual agent accompanying the drive is realized as a Wizard-of-Oz prototype which enables the experimenter to manually trigger prerecorded voice samples playing through the in-car speakers and to bring new content to the center screen. Figure FIGREF4 shows the driving simulator.
The experimental setting is comparable to an everyday driving task. Participants are told that the goal of the study is to evaluate and to improve an intelligent driving assistant. To increase the probability of emotions to arise, participants are instructed to reach the destination of the route as fast as possible while following traffic rules and speed limits. They are informed that the time needed for the task would be compared to other participants. The route comprises highways, rural roads, and city streets. A navigation system with voice commands and information on the screen keeps the participants on the predefined track.
To trigger emotion changes in the participant, we use the following events: (i) a car on the right lane cutting off to the left lane when participants try to overtake followed by trucks blocking both lanes with a slow overtaking maneuver (ii) a skateboarder who appears unexpectedly on the street and (iii) participants are praised for reaching the destination unexpectedly quickly in comparison to previous participants.
Based on these events, we trigger three interactions (Table TABREF6 provides examples) with the intelligent agent (Driver-Agent Interactions, D–A). Pretending to be aware of the current situation, e. g., to recognize unusual driving behavior such as strong braking, the agent asks the driver to explain his subjective perception of these events in detail. Additionally, we trigger two more interactions with the intelligent agent at the beginning and at the end of the drive, where participants are asked to describe their mood and thoughts regarding the (upcoming) drive. This results in five interactions between the driver and the virtual agent.
Furthermore, the co-driver asks three different questions during sessions with light traffic and low cognitive demand (Driver-Co-Driver Interactions, D–Co). These questions are more general and non-traffic-related and aim at triggering the participants' memory and fantasy. Participants are asked to describe their last vacation, their dream house and their idea of the perfect job. In sum, there are eight interactions per participant (5 D–A, 3 D–Co).
Data set Collection ::: Procedure
At the beginning of the study, participants were welcomed and the upcoming study procedure was explained. Subsequently, participants signed a consent form and completed a questionnaire to provide demographic information. After that, the co-driving experimenter started with the instruction in the simulator, which was followed by a familiarization drive consisting of highway and city driving and covering different driving maneuvers such as tight corners, lane changing and strong braking. Subsequently, participants started with the main driving task. The drive had a duration of 20 minutes containing the eight previously mentioned speech interactions. After the completion of the drive, the actual goal of improving automatic emotion recognition was revealed and a standard emotional intelligence questionnaire, namely the TEIQue-SF BIBREF32, was handed to the participants. Finally, a retrospective interview was conducted, in which participants were played recordings of their in-car interactions and asked to give discrete (annoyance, insecurity, joy, relaxation, boredom, none, following BIBREF8) as well as dimensional (valence, arousal, dominance BIBREF33 on an 11-point scale) emotion ratings for the interactions and the corresponding situations. We only use the discrete class annotations in this paper.
Data set Collection ::: Data Analysis
Overall, 36 participants aged 18 to 64 years ($\mu $=28.89, $\sigma $=12.58) completed the experiment. This leads to 288 interactions, 180 between driver and the agent and 108 between driver and co-driver. The emotion self-ratings from the participants yielded 90 utterances labeled with joy, 26 with annoyance, 49 with insecurity, 9 with boredom, 111 with relaxation and 3 with no emotion. One example interaction per interaction type and emotion is shown in Table TABREF7. For further experiments, we only use joy, annoyance/anger, and insecurity/fear due to the small sample size for boredom and no emotion and under the assumption that relaxation brings little expressivity.
Methods ::: Emotion Recognition from Facial Expressions
We preprocess the visual data by extracting the sequence of images for each interaction from the point where the agent's or the co-driver's question was completely uttered until the driver's response stops. The average length is 16.3 seconds, with the minimum at 2.2s and the maximum at 54.7s. We apply an off-the-shelf tool for emotion recognition (the manufacturer cannot be disclosed due to licensing restrictions). It delivers frame-by-frame scores ($\in [0;100]$) for discrete emotional states of joy, anger and fear. While joy corresponds directly to our annotation, we map anger to our label annoyance and fear to our label insecurity. The maximal average score across all frames constitutes the overall classification for the video sequence. Frames where the software is not able to detect the face are ignored.
Methods ::: Emotion Recognition from Audio Signal
We extract the audio signal for the same sequence as described for facial expressions and apply an off-the-shelf tool for emotion recognition. The software delivers single classification scores for a set of 24 discrete emotions for the entire utterance. We consider the outputs for the states of joy, anger, and fear, mapping analogously to our classes as for facial expressions. Low-confidence predictions are interpreted as “no emotion”. We accept the emotion with the highest score as the discrete prediction otherwise.
Methods ::: Emotion Recognition from Transcribed Utterances
For the emotion recognition from text, we manually transcribe all utterances of our AMMER study. To exploit existing and available data sets which are larger than the AMMER data set, we develop a transfer learning approach. We use a neural network with an embedding layer (frozen weights, pre-trained on Common Crawl and Wikipedia BIBREF36), a bidirectional LSTM BIBREF37, and two dense layers followed by a softmax output layer. This setup is inspired by BIBREF38. We use a dropout rate of 0.3 in all layers and optimize with Adam BIBREF39 with a learning rate of $10^{-5}$ (these parameters are the same for all further experiments). We build on top of the Keras library with the TensorFlow backend. We consider this setup our baseline model.
We train models on a variety of corpora, namely the common format published by BIBREF27 of the FigureEight (formerly known as Crowdflower) data set of social media, the ISEAR data BIBREF40 (self-reported emotional events), and the Twitter Emotion Corpus (TEC, weakly annotated Tweets with #anger, #disgust, #fear, #happy, #sadness, and #surprise, Mohammad2012). From all corpora, we use instances with the labels fear, anger, or joy. These corpora are in English; however, we make predictions on German utterances. Therefore, each corpus is translated to German with Google Translate. We remove URLs, user tags (“@Username”), punctuation and hash signs. The distributions of the data sets are shown in Table TABREF12.
To adapt models trained on these data, we apply transfer learning as follows: The model is first trained until convergence on one out-of-domain corpus (only on the classes fear, joy, and anger for compatibility reasons). Then, the parameters of the bi-LSTM layer are frozen and the remaining layers are further trained on AMMER. This procedure is illustrated in Figure FIGREF13.
Results ::: Facial Expressions and Audio
Table TABREF16 shows the confusion matrices for facial and audio emotion recognition on our complete AMMER data set and Table TABREF17 shows the results per class for each method, including facial and audio data and micro and macro averages. The classification from facial expressions yields a macro-averaged $\text{F}_1$ score of 33 % across the three emotions joy, insecurity, and annoyance (P=0.31, R=0.35). While the classification results for joy are promising (R=43 %, P=57 %), the distinction of insecurity and annoyance from the other classes appears to be more challenging.
Regarding the audio signal, we observe a macro $\text{F}_1$ score of 29 % (P=42 %, R=22 %). There is a bias towards negative emotions, which results in a small number of detected joy predictions (R=4 %). Insecurity and annoyance are frequently confused.
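For reference, the per-class and macro-averaged scores reported in this subsection can be computed with scikit-learn as sketched below; the label lists are toy values standing in for the gold annotations and the tool predictions.

```python
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

LABELS = ["joy", "annoyance", "insecurity"]
y_true = ["joy", "joy", "annoyance", "insecurity", "joy", "annoyance"]         # toy gold labels
y_pred = ["joy", "insecurity", "annoyance", "annoyance", "joy", "insecurity"]  # toy predictions

print(confusion_matrix(y_true, y_pred, labels=LABELS))
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=LABELS, average="macro", zero_division=0)
print(f"macro P={prec:.2f} R={rec:.2f} F1={f1:.2f}")
```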
Results ::: Text from Transcribed Utterances
The experimental setting for the evaluation of emotion recognition from text is as follows: We evaluate the BiLSTM model in three different experiments: (1) in-domain, (2) out-of-domain and (3) transfer learning. For all experiments we train on the classes anger/annoyance, fear/insecurity and joy. Table TABREF19 shows all results for the comparison of these experimental settings.
Results ::: Text from Transcribed Utterances ::: Experiment 1: In-Domain application
We first set a baseline by validating our models on established corpora. We train the baseline model on 60 % of each data set listed in Table TABREF12 and evaluate that model with 40 % of the data from the same domain (results shown in the column “In-Domain” in Table TABREF19). Excluding AMMER, we achieve an average micro $\text{F}_1$ of 68 %, with best results of F$_1$=73 % on TEC. The model trained on our AMMER corpus achieves an F1 score of 57%. This is most probably due to the small size of this data set and the class bias towards joy, which makes up more than half of the data set. These results are mostly in line with Bostan2018.
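The in-domain setting corresponds to a simple 60/40 split per corpus with micro-averaged F$_1$; a sketch with scikit-learn (toy data in place of the translated corpora, and a stand-in for the model predictions) could look as follows.

```python
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

texts = ["so happy today", "this is terrible", "I am scared", "great news", "what a mess"]
labels = ["joy", "anger", "fear", "joy", "anger"]  # toy corpus

x_train, x_test, y_train, y_test = train_test_split(
    texts, labels, train_size=0.6, random_state=42)

# Train the BiLSTM baseline on (x_train, y_train) and predict on x_test; here we
# use the gold labels as a stand-in for real model predictions.
y_pred = y_test
print("micro F1:", f1_score(y_test, y_pred, average="micro"))
```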
Results ::: Text from Transcribed Utterances ::: Experiment 2: Simple Out-Of-Domain application
Now we analyze how well the models trained in Experiment 1 perform when applied to our data set. The results are shown in column “Simple” in Table TABREF19. We observe a clear drop in performance, with an average of F$_1$=48 %. The best-performing model is again the one trained on TEC, on par with the one trained on the Figure8 data. While the model trained on ISEAR performs second best in Experiment 1, it performs worst in Experiment 2.
Results ::: Text from Transcribed Utterances ::: Experiment 3: Transfer Learning application
To adapt models trained on previously existing data sets to our particular application, the AMMER corpus, we apply transfer learning. Here, we perform leave-one-out cross validation. As pre-trained models we use each model from Experiment 1 and further optimize with the training subset of each crossvalidation iteration of AMMER. The results are shown in the column “Transfer L.” in Table TABREF19. The confusion matrix is also depicted in Table TABREF16.
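The cross-validation loop for this experiment can be sketched as below; `build_pretrained_model` and `fine_tune` are placeholders for loading a model from Experiment 1 and for the partial retraining described in the methods section, and the feature arrays stand for the encoded AMMER utterances.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut

def transfer_loo_predictions(x_ammer, y_ammer, build_pretrained_model, fine_tune):
    """Leave-one-out cross-validation: fine-tune a fresh copy of the pre-trained model
    on all AMMER instances but one and predict the held-out instance."""
    predictions = np.empty_like(y_ammer)
    for train_idx, test_idx in LeaveOneOut().split(x_ammer):
        model = build_pretrained_model()  # model pre-trained on one out-of-domain corpus
        model = fine_tune(model, x_ammer[train_idx], y_ammer[train_idx])
        probs = model.predict(x_ammer[test_idx])
        predictions[test_idx] = probs.argmax(axis=-1)
    return predictions
```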
With this procedure we achieve an average performance of F$_1$=75 %, being better than the results from the in-domain Experiment 1. The best performance of F$_1$=76 % is achieved with the model pre-trained on each data set, except for ISEAR. All transfer learning models clearly outperform their simple out-of-domain counterpart.
To ensure that this performance increase is not only due to the larger data set, we compare these results to training the model without transfer on a corpus consisting of each corpus together with AMMER (again, in leave-one-out crossvalidation). These results are depicted in column “Joint C.”. Thus, both settings, “transfer learning” and “joint corpus” have access to the same information.
The results show an increase in performance in contrast to not using AMMER for training; however, the transfer approach based on partially retraining the model shows a clear improvement for all models (by 7pp for Figure8, 10pp for EmoInt, 8pp for TEC, 13pp for ISEAR) compared to the “Joint” setup.
Summary & Future Work
We described the creation of the multimodal AMMER data with emotional speech interactions between a driver and both a virtual agent and a co-driver. We analyzed the modalities of facial expressions, acoustics, and transcribed utterances regarding their potential for emotion recognition during in-car speech interactions. We applied off-the-shelf emotion recognition tools for facial expressions and acoustics. For transcribed text, we developed a neural network-based classifier with transfer learning exploiting existing annotated corpora. We find that analyzing transcribed utterances is most promising for classification of the three emotional states of joy, annoyance and insecurity.
Our results for facial expressions indicate that there is potential for the classification of joy, however, the states of annoyance and insecurity are not well recognized. Future work needs to investigate more sophisticated approaches to map frame predictions to sequence predictions. Furthermore, movements of the mouth region during speech interactions might negatively influence the classification from facial expressions. Therefore, the question remains how facial expressions can best contribute to multimodal detection in speech interactions.
Regarding the classification from the acoustic signal, the application of off-the-shelf classifiers without further adjustments seems to be challenging. We find a strong bias towards negative emotional states for our experimental setting. For instance, the personalization of the recognition algorithm (e. g., mean and standard deviation normalization) could help to adapt the classification for specific speakers and thus to reduce this bias. Further, the acoustic environment in the vehicle interior has special properties and the recognition software might need further adaptations.
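One way to realize the suggested personalization is a speaker-wise z-normalization of acoustic features before classification; the sketch below illustrates the idea on a generic feature matrix and is not tied to the commercial tool we used.

```python
import numpy as np

def normalize_per_speaker(features, speaker_ids):
    """Z-normalize each acoustic feature per speaker (mean 0, std 1)."""
    features = np.asarray(features, dtype=float)
    normalized = np.empty_like(features)
    for speaker in np.unique(speaker_ids):
        mask = np.asarray(speaker_ids) == speaker
        mu = features[mask].mean(axis=0)
        sigma = features[mask].std(axis=0) + 1e-8  # avoid division by zero
        normalized[mask] = (features[mask] - mu) / sigma
    return normalized

# toy example: two speakers, three utterances, two features each
feats = [[1.0, 200.0], [2.0, 220.0], [10.0, 400.0]]
print(normalize_per_speaker(feats, ["A", "A", "B"]))
```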
Our transfer learning-based text classifier shows considerably better results. This is a substantial result in its own right, as, to the best of our knowledge, only one previous method for transfer learning in emotion recognition has been proposed, in which a sentiment/emotion-specific source of labels is used in pre-training BIBREF29. Other applications of transfer learning from general language models include BIBREF41, BIBREF42. Our approach is substantially different, not being trained on a huge amount of noisy data, but on smaller out-of-domain sets of higher quality. This result suggests that emotion classification systems which work across domains can be developed with reasonable effort.
For a productive application of emotion detection in the context of speech events we conclude that a deployed system might perform best with a speech-to-text module followed by an analysis of the text. Further, in this work, we did not explore an ensemble model or the interaction of different modalities. Thus, future work should investigate the fusion of multiple modalities in a single classifier.
Acknowledgment
We thank Laura-Ana-Maria Bostan for discussions and data set preparations. This research has partially been funded by the German Research Council (DFG), project SEAT (KL 2869/1-1). | confusion matrices, $\text{F}_1$ score |
00bcdffff7e055f99aaf1b05cf41c98e2748e948 | 00bcdffff7e055f99aaf1b05cf41c98e2748e948_0 | Q: What is the baseline method for the task?
Text: Introduction
Automatic emotion recognition is commonly understood as the task of assigning an emotion to a predefined instance, for example an utterance (as audio signal), an image (for instance with a depicted face), or a textual unit (e.g., a transcribed utterance, a sentence, or a Tweet). The set of emotions is often following the original definition by Ekman Ekman1992, which includes anger, fear, disgust, sadness, joy, and surprise, or the extension by Plutchik Plutchik1980 who adds trust and anticipation.
Most work in emotion detection is limited to one modality. Exceptions include Busso2004 and Sebe2005, who investigate multimodal approaches combining speech with facial information. Emotion recognition in speech can utilize semantic features as well BIBREF0. Note that the term “multimodal” is also used beyond the combination of vision, audio, and text. For example, Soleymani2012 use it to refer to the combination of electroencephalogram, pupillary response and gaze distance.
In this paper, we deal with the specific situation of car environments as a testbed for multimodal emotion recognition. This is an interesting environment since it is, to some degree, a controlled environment: Dialogue partners are limited in movement, the degrees of freedom for occurring events are limited, and several sensors which are useful for emotion recognition are already integrated in this setting. More specifically, we focus on emotion recognition from speech events in a dialogue with a human partner and with an intelligent agent.
Also from the application point of view, the domain is a relevant choice: Past research has shown that emotional intelligence is beneficial for human computer interaction. Properly processing emotions in interactions increases the engagement of users and can improve performance when a specific task is to be fulfilled BIBREF1, BIBREF2, BIBREF3, BIBREF4. This is mostly based on the aspect that machines communicating with humans appear to be more trustworthy when they show empathy and are perceived as being natural BIBREF3, BIBREF5, BIBREF4.
Virtual agents play an increasingly important role in the automotive context and the speech modality is increasingly being used in cars due to its potential to limit distraction. It has been shown that adapting the in-car speech interaction system according to the drivers' emotional state can help to enhance security, performance as well as the overall driving experience BIBREF6, BIBREF7.
With this paper, we investigate how each of the three considered modalities, namely facial expressions, utterances of a driver as an audio signal, and transcribed text, contributes to the task of emotion recognition in in-car speech interactions. We focus on the five emotions of joy, insecurity, annoyance, relaxation, and boredom, since terms corresponding to so-called fundamental emotions like fear have been shown to be associated with emotional states too strong to be appropriate for the in-car context BIBREF8. Our first contribution is the description of the experimental setup for our data collection. Aiming to provoke specific emotions with situations which can occur in real-world driving scenarios and to induce speech interactions, the study was conducted in a driving simulator. Based on the collected data, we provide baseline predictions with off-the-shelf tools for face and speech emotion recognition and compare them to a neural network-based approach for emotion recognition from text. Our second contribution is the introduction of transfer learning to adapt models trained on established out-of-domain corpora to our use case. We work on German language data; therefore, the transfer consists of a domain and a language transfer.
Related Work ::: Facial Expressions
A common approach to encode emotions for facial expressions is the facial action coding system FACS BIBREF9, BIBREF10, BIBREF11. As the reliability and reproducibility of findings with this method have been critically discussed BIBREF12, the trend has increasingly shifted to performing the recognition directly on images and videos, especially with deep learning. For instance, jung2015joint developed a model which considers temporal geometry features and temporal appearance features from image sequences. kim2016hierarchical propose an ensemble of convolutional neural networks which outperforms isolated networks.
In the automotive domain, FACS is still popular. Ma2017 use support vector machines to distinguish happy, bothered, confused, and concentrated based on data from a natural driving environment. They found that bothered and confused are difficult to distinguish, while happy and concentrated are well identified. Aiming to reduce computational cost, Tews2011 apply a simple feature extraction using four dots in the face defining three facial areas. They analyze the variance of the three facial areas for the recognition of happy, anger, and neutral. Ihme2018 aim at detecting frustration in a simulator environment. They induce the emotion with specific scenarios and a demanding secondary task and are able to associate specific face movements according to FACS. Paschero2012 use OpenCV (https://opencv.org/) to detect the eyes and the mouth region and track facial movements. They simulate different lighting conditions and apply a multilayer perceptron for the classification task of Ekman's set of fundamental emotions.
Overall, we found that studies using facial features usually focus on continuous driver monitoring, often in driver-only scenarios. In contrast, our work investigates the potential of emotion recognition during speech interactions.
Related Work ::: Acoustic
Past research on emotion recognition from acoustics mainly concentrates on either feature selection or the development of appropriate classifiers. rao2013emotion as well as ververidis2004automatic compare local and global features in support vector machines. Next to such discriminative approaches, hidden Markov models are well-studied; however, there is no agreement on which feature-based classifier is most suitable BIBREF13. Similar to the facial expression modality, recent efforts to apply deep learning to acoustic speech processing have increased. For instance, lee2015high use a recurrent neural network and palaz2015analysis apply a convolutional neural network to the raw speech signal. Neumann2017 as well as Trigeorgis2016 analyze the importance of features in the context of deep learning-based emotion recognition.
In the automotive sector, Boril2011 approach the detection of negative emotional states within interactions between driver and co-driver as well as in calls of the driver towards the automated spoken dialogue system. Using real-world driving data, they find that the combination of acoustic features and their respective Gaussian mixture model scores performs best. Schuller2006 collect 2,000 dialog turns directed towards an automotive user interface and investigate the classification of anger, confusion, and neutral. They show that automatic feature generation and feature selection boost the performance of an SVM-based classifier. Further, they analyze the performance under systematically added noise and develop methods to mitigate negative effects. For more details, we refer the reader to the survey by Schuller2018. In this work, we explore the straightforward application of domain-independent software to an in-car scenario without domain-specific adaptations.
Related Work ::: Text
Previous work on emotion analysis in natural language processing focuses either on resource creation or on emotion classification for a specific task and domain. On the side of resource creation, the early and influential work of Pennebaker2015 is a dictionary of words associated with different psychologically relevant categories, including a subset of emotions. Another popular resource is the NRC dictionary by Mohammad2012b. It contains more than 10,000 words for a set of discrete emotion classes. Other resources include WordNet Affect BIBREF14, which distinguishes particular word classes. Further, annotated corpora have been created for a set of different domains, for instance fairy tales BIBREF15, blogs BIBREF16, Twitter BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, Facebook BIBREF22, news headlines BIBREF23, dialogues BIBREF24, literature BIBREF25, or self reports on emotion events BIBREF26 (see BIBREF27 for an overview).
To automatically assign emotions to textual units, the application of dictionaries has been a popular approach and still is, particularly in domains without annotated corpora. Another approach to overcome the lack of huge amounts of annotated training data in a particular domain or for a specific topic is to exploit distant supervision: use the signal of occurrences of emoticons or specific hashtags or words to automatically label the data. This is sometimes referred to as self-labeling BIBREF21, BIBREF28, BIBREF29, BIBREF30.
A variety of classification approaches have been tested, including SNoW BIBREF15, support vector machines BIBREF16, maximum entropy classification, long short-term memory networks, and convolutional neural network models BIBREF18. More recently, the state of the art is the use of transfer learning from noisy annotations to more specific predictions BIBREF29. Still, it has been shown that transferring from one domain to another is challenging, as the way emotions are expressed varies between areas BIBREF27. The approach by Felbo2017 differs from our work in that they use a huge noisy data set for pretraining the model, while we use small, high-quality data sets instead.
Recently, the state of the art has also been pushed forward with a set of shared tasks, in which the participants with top results mostly exploit deep learning methods for prediction based on pretrained structures like embeddings or language models BIBREF21, BIBREF31, BIBREF20.
Our work follows this approach and builds on embeddings with deep learning. Furthermore, we approach the application and adaptation of text-based classifiers to the automotive domain with transfer learning.
Data set Collection
The first contribution of this paper is the construction of the AMMER data set, which we describe in the following. We focus on the drivers' interactions with both a virtual agent and a co-driver. To collect the data in a safe and controlled environment and to be able to consider a variety of predefined driving situations, the study was conducted in a driving simulator.
Data set Collection ::: Study Setup and Design
The study environment consists of a fixed-base driving simulator running Vires's VTD (Virtual Test Drive, v2.2.0) simulation software (https://vires.com/vtd-vires-virtual-test-drive/). The vehicle has an automatic transmission, a steering wheel and gas and brake pedals. We collect data from video, speech and biosignals (Empatica E4 to record heart rate, electrodermal activity, skin temperature; not further used in this paper) and questionnaires. Two RGB cameras are fixed in the vehicle to capture the driver's face, one at the sun shield above the driver's seat and one in the middle of the dashboard. A microphone is placed on the center console. One experimenter sits next to the driver, the other behind the simulator. The virtual agent accompanying the drive is realized as a Wizard-of-Oz prototype which enables the experimenter to manually trigger prerecorded voice samples playing through the in-car speakers and to bring new content to the center screen. Figure FIGREF4 shows the driving simulator.
The experimental setting is comparable to an everyday driving task. Participants are told that the goal of the study is to evaluate and to improve an intelligent driving assistant. To increase the probability of emotions to arise, participants are instructed to reach the destination of the route as fast as possible while following traffic rules and speed limits. They are informed that the time needed for the task would be compared to other participants. The route comprises highways, rural roads, and city streets. A navigation system with voice commands and information on the screen keeps the participants on the predefined track.
To trigger emotion changes in the participant, we use the following events: (i) a car on the right lane cutting off to the left lane when participants try to overtake, followed by trucks blocking both lanes with a slow overtaking maneuver, (ii) a skateboarder who appears unexpectedly on the street, and (iii) praise for reaching the destination unexpectedly quickly in comparison to previous participants.
Based on these events, we trigger three interactions (Table TABREF6 provides examples) with the intelligent agent (Driver-Agent Interactions, D–A). Pretending to be aware of the current situation, e. g., to recognize unusual driving behavior such as strong braking, the agent asks the driver to explain his subjective perception of these events in detail. Additionally, we trigger two more interactions with the intelligent agent at the beginning and at the end of the drive, where participants are asked to describe their mood and thoughts regarding the (upcoming) drive. This results in five interactions between the driver and the virtual agent.
Furthermore, the co-driver asks three different questions during sessions with light traffic and low cognitive demand (Driver-Co-Driver Interactions, D–Co). These questions are more general and non-traffic-related and aim at triggering the participants' memory and fantasy. Participants are asked to describe their last vacation, their dream house and their idea of the perfect job. In sum, there are eight interactions per participant (5 D–A, 3 D–Co).
Data set Collection ::: Procedure
At the beginning of the study, participants were welcomed and the upcoming study procedure was explained. Subsequently, participants signed a consent form and completed a questionnaire to provide demographic information. After that, the co-driving experimenter started with the instruction in the simulator, which was followed by a familiarization drive consisting of highway and city driving and covering different driving maneuvers such as tight corners, lane changing and strong braking. Subsequently, participants started with the main driving task. The drive had a duration of 20 minutes containing the eight previously mentioned speech interactions. After the completion of the drive, the actual goal of improving automatic emotion recognition was revealed and a standard emotional intelligence questionnaire, namely the TEIQue-SF BIBREF32, was handed to the participants. Finally, a retrospective interview was conducted, in which participants were played recordings of their in-car interactions and asked to give discrete (annoyance, insecurity, joy, relaxation, boredom, none, following BIBREF8) as well as dimensional (valence, arousal, dominance BIBREF33 on an 11-point scale) emotion ratings for the interactions and the according situations. We only use the discrete class annotations in this paper.
Data set Collection ::: Data Analysis
Overall, 36 participants aged 18 to 64 years ($\mu $=28.89, $\sigma $=12.58) completed the experiment. This leads to 288 interactions, 180 between driver and the agent and 108 between driver and co-driver. The emotion self-ratings from the participants yielded 90 utterances labeled with joy, 26 with annoyance, 49 with insecurity, 9 with boredom, 111 with relaxation and 3 with no emotion. One example interaction per interaction type and emotion is shown in Table TABREF7. For further experiments, we only use joy, annoyance/anger, and insecurity/fear due to the small sample size for boredom and no emotion and under the assumption that relaxation brings little expressivity.
Methods ::: Emotion Recognition from Facial Expressions
We preprocess the visual data by extracting the sequence of images for each interaction from the point where the agent's or the co-driver's question was completely uttered until the driver's response stops. The average length is 16.3 seconds, with the minimum at 2.2s and the maximum at 54.7s. We apply an off-the-shelf tool for emotion recognition (the manufacturer cannot be disclosed due to licensing restrictions). It delivers frame-by-frame scores ($\in [0;100]$) for discrete emotional states of joy, anger and fear. While joy corresponds directly to our annotation, we map anger to our label annoyance and fear to our label insecurity. The emotion with the maximal average score across all frames constitutes the overall classification for the video sequence. Frames where the software is not able to detect the face are ignored.
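This frame-to-sequence aggregation can be summarized with the following minimal sketch; it assumes the tool's output has already been parsed into one score dictionary per frame, and the function and variable names are ours, not part of the tool's API.

```python
# Minimal sketch of the frame-to-sequence aggregation for the facial modality.
# `frame_scores` is assumed to hold one dict per frame, e.g.
# {"joy": 12.0, "anger": 3.5, "fear": 0.8}, or None if no face was detected.

TOOL_TO_AMMER = {"joy": "joy", "anger": "annoyance", "fear": "insecurity"}

def aggregate_facial_scores(frame_scores):
    # Ignore frames in which the software could not detect a face.
    valid = [frame for frame in frame_scores if frame is not None]
    if not valid:
        return None
    # Average each emotion score over all valid frames.
    averages = {emotion: sum(frame[emotion] for frame in valid) / len(valid)
                for emotion in TOOL_TO_AMMER}
    # The emotion with the maximal average score labels the whole sequence.
    best = max(averages, key=averages.get)
    return TOOL_TO_AMMER[best]
```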
Methods ::: Emotion Recognition from Audio Signal
We extract the audio signal for the same sequence as described for facial expressions and apply an off-the-shelf tool for emotion recognition. The software delivers single classification scores for a set of 24 discrete emotions for the entire utterance. We consider the outputs for the states of joy, anger, and fear, mapping analogously to our classes as for facial expressions. We accept the emotion with the highest score as the discrete prediction; low-confidence predictions are interpreted as “no emotion”.
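The utterance-level decision rule can be sketched as follows; the confidence threshold is an illustrative placeholder, since the actual cut-off depends on the (undisclosed) tool.

```python
# Minimal sketch of the utterance-level decision for the acoustic modality.
# `utterance_scores` is assumed to map the tool's emotion labels to scores;
# the threshold of 0.5 is a placeholder, not a value reported in the paper.

TOOL_TO_AMMER = {"joy": "joy", "anger": "annoyance", "fear": "insecurity"}

def classify_audio(utterance_scores, confidence_threshold=0.5):
    relevant = {label: utterance_scores.get(label, 0.0) for label in TOOL_TO_AMMER}
    best_label, best_score = max(relevant.items(), key=lambda item: item[1])
    if best_score < confidence_threshold:
        return "no emotion"  # low-confidence predictions are discarded
    return TOOL_TO_AMMER[best_label]
```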
Methods ::: Emotion Recognition from Transcribed Utterances
For the emotion recognition from text, we manually transcribe all utterances of our AMMER study. To exploit existing and available data sets which are larger than the AMMER data set, we develop a transfer learning approach. We use a neural network with an embedding layer (frozen weights, pre-trained on Common Crawl and Wikipedia BIBREF36), a bidirectional LSTM BIBREF37, and two dense layers followed by a softmax output layer. This setup is inspired by BIBREF38. We use a dropout rate of 0.3 in all layers and optimize with Adam BIBREF39 with a learning rate of $10^{-5}$ (these parameters are the same for all further experiments). We build on top of the Keras library with the TensorFlow backend. We consider this setup our baseline model.
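A minimal Keras sketch consistent with this description could look as follows; the hidden sizes, the shape of the embedding matrix, and the loss function are placeholders not specified in the text, and the layer name "bilstm" is introduced here only so that later sketches can refer to it.

```python
# Minimal sketch of the baseline text model: frozen pre-trained embeddings,
# a bidirectional LSTM, two dense layers, and a softmax output, trained with
# Adam (learning rate 1e-5) and dropout 0.3. Hidden sizes are placeholders.
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

def build_baseline(embedding_matrix, lstm_units=128, dense_units=64, num_classes=3):
    vocab_size, emb_dim = embedding_matrix.shape
    model = models.Sequential([
        layers.Embedding(vocab_size, emb_dim,
                         embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
                         trainable=False),                       # frozen embeddings
        layers.Bidirectional(layers.LSTM(lstm_units, dropout=0.3), name="bilstm"),
        layers.Dense(dense_units, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(dense_units, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=optimizers.Adam(learning_rate=1e-5),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```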
We train models on a variety of corpora, namely the common format published by BIBREF27 of the FigureEight (formerly known as Crowdflower) data set of social media, the ISEAR data BIBREF40 (self-reported emotional events), and the Twitter Emotion Corpus (TEC, weakly annotated Tweets with #anger, #disgust, #fear, #happy, #sadness, and #surprise, Mohammad2012). From all corpora, we use instances with labels fear, anger, or joy. These corpora are in English; however, we make predictions on German utterances. Therefore, each corpus is machine-translated to German with Google Translate. We remove URLs, user tags (“@Username”), punctuation and hash signs. The distributions of the data sets are shown in Table TABREF12.
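The cleanup steps can be sketched with a few regular expressions; the translation step itself is not shown, since it was performed with Google Translate beforehand.

```python
# Minimal sketch of the corpus cleanup (applied to the German translations):
# remove URLs, user tags such as "@Username", punctuation, and hash signs.
import re
import string

def clean_text(text):
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)   # URLs
    text = re.sub(r"@\w+", " ", text)                    # user tags
    text = text.replace("#", " ")                        # hash signs
    text = text.translate(str.maketrans("", "", string.punctuation))
    return re.sub(r"\s+", " ", text).strip()

# Example: clean_text("Tolles Spiel! @user http://t.co/xyz #happy") == "Tolles Spiel happy"
```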
To adapt models trained on these data, we apply transfer learning as follows: The model is first trained until convergence on one out-of-domain corpus (only on the classes fear, joy, and anger for compatibility reasons). Then, the parameters of the bi-LSTM layer are frozen and the remaining layers are further trained on AMMER. This procedure is illustrated in Figure FIGREF13.
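A sketch of this two-step procedure, reusing the build_baseline sketch above, is given below; epoch counts and batch sizes are placeholders, and early stopping stands in for "training until convergence".

```python
# Minimal sketch of the transfer procedure: pre-train on one out-of-domain corpus,
# freeze the bi-LSTM layer, then continue training the remaining layers on AMMER.
import tensorflow as tf
from tensorflow.keras.callbacks import EarlyStopping

def transfer(model, x_source, y_source, x_ammer, y_ammer):
    # Step 1: train until convergence on the out-of-domain corpus.
    model.fit(x_source, y_source, epochs=100, batch_size=32, validation_split=0.1,
              callbacks=[EarlyStopping(patience=3, restore_best_weights=True)])
    # Step 2: freeze the bi-LSTM layer (the embedding layer is already frozen).
    model.get_layer("bilstm").trainable = False
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    # Step 3: further train the remaining (dense) layers on AMMER.
    model.fit(x_ammer, y_ammer, epochs=50, batch_size=8)
    return model
```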
Results ::: Facial Expressions and Audio
Table TABREF16 shows the confusion matrices for facial and audio emotion recognition on our complete AMMER data set and Table TABREF17 shows the results per class for each method, including facial and audio data and micro and macro averages. The classification from facial expressions yields a macro-averaged $\text{F}_1$ score of 33 % across the three emotions joy, insecurity, and annoyance (P=31 %, R=35 %). While the classification results for joy are promising (R=43 %, P=57 %), the distinction of insecurity and annoyance from the other classes appears to be more challenging.
Regarding the audio signal, we observe a macro $\text{F}_1$ score of 29 % (P=42 %, R=22 %). There is a bias towards negative emotions, which results in a small number of detected joy predictions (R=4 %). Insecurity and annoyance are frequently confused.
Results ::: Text from Transcribed Utterances
The experimental setting for the evaluation of emotion recognition from text is as follows: We evaluate the BiLSTM model in three different experiments: (1) in-domain, (2) out-of-domain and (3) transfer learning. For all experiments we train on the classes anger/annoyance, fear/insecurity and joy. Table TABREF19 shows all results for the comparison of these experimental settings.
Results ::: Text from Transcribed Utterances ::: Experiment 1: In-Domain application
We first set a baseline by validating our models on established corpora. We train the baseline model on 60 % of each data set listed in Table TABREF12 and evaluate that model with 40 % of the data from the same domain (results shown in the column “In-Domain” in Table TABREF19). Excluding AMMER, we achieve an average micro $\text{F}_1$ of 68 %, with best results of F$_1$=73 % on TEC. The model trained on our AMMER corpus achieves an $\text{F}_1$ score of 57 %. This is most probably due to the small size of this data set and the class bias towards joy, which makes up more than half of the data set. These results are mostly in line with Bostan2018.
Results ::: Text from Transcribed Utterances ::: Experiment 2: Simple Out-Of-Domain application
Now we analyze how well the models trained in Experiment 1 perform when applied to our data set. The results are shown in column “Simple” in Table TABREF19. We observe a clear drop in performance, with an average of F$_1$=48 %. The best performing model is again the one trained on TEC, on par with the one trained on the Figure8 data. The model trained on ISEAR, which performs second best in Experiment 1, performs worst in Experiment 2.
Results ::: Text from Transcribed Utterances ::: Experiment 3: Transfer Learning application
To adapt models trained on previously existing data sets to our particular application, the AMMER corpus, we apply transfer learning. Here, we perform leave-one-out cross-validation. As pre-trained models we use each model from Experiment 1 and further optimize with the training subset of each cross-validation iteration of AMMER. The results are shown in the column “Transfer L.” in Table TABREF19. The confusion matrix is also depicted in Table TABREF16.
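The evaluation loop can be sketched as follows, assuming the sketches from the Methods section; the fine-tuning hyperparameters and the micro-averaged score in the last line are illustrative, not values prescribed by the text.

```python
# Minimal sketch of the leave-one-out evaluation on AMMER: for every held-out
# utterance, a fresh copy of a pre-trained model is fine-tuned (bi-LSTM frozen)
# on the remaining instances and then evaluated on the held-out instance.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import f1_score

def evaluate_transfer(load_pretrained, x_ammer, y_ammer):
    predictions = np.zeros_like(y_ammer)
    for train_idx, test_idx in LeaveOneOut().split(x_ammer):
        model = load_pretrained()  # fresh copy of a model from Experiment 1
        model.get_layer("bilstm").trainable = False
        model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
                      loss="sparse_categorical_crossentropy")
        model.fit(x_ammer[train_idx], y_ammer[train_idx],
                  epochs=50, batch_size=8, verbose=0)
        predictions[test_idx] = np.argmax(model.predict(x_ammer[test_idx]), axis=1)
    return f1_score(y_ammer, predictions, average="micro")
```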
With this procedure we achieve an average performance of F$_1$=75 %, being better than the results from the in-domain Experiment 1. The best performance of F$_1$=76 % is achieved with the models pre-trained on each data set except for ISEAR. All transfer learning models clearly outperform their simple out-of-domain counterpart.
To ensure that this performance increase is not only due to the larger data set, we compare these results to training the model without transfer on a joint corpus consisting of each out-of-domain corpus together with AMMER (again, in leave-one-out cross-validation). These results are depicted in column “Joint C.”. Thus, both settings, “transfer learning” and “joint corpus”, have access to the same information.
The results show an increase in performance in contrast to not using AMMER for training; however, the transfer approach based on partially retraining the model shows a clear improvement for all models (by 7pp for Figure8, 10pp for EmoInt, 8pp for TEC, 13pp for ISEAR) compared to the “Joint” setup.
Summary & Future Work
We described the creation of the multimodal AMMER data with emotional speech interactions between a driver and both a virtual agent and a co-driver. We analyzed the modalities of facial expressions, acoustics, and transcribed utterances regarding their potential for emotion recognition during in-car speech interactions. We applied off-the-shelf emotion recognition tools for facial expressions and acoustics. For transcribed text, we developed a neural network-based classifier with transfer learning exploiting existing annotated corpora. We find that analyzing transcribed utterances is most promising for classification of the three emotional states of joy, annoyance and insecurity.
Our results for facial expressions indicate that there is potential for the classification of joy, however, the states of annoyance and insecurity are not well recognized. Future work needs to investigate more sophisticated approaches to map frame predictions to sequence predictions. Furthermore, movements of the mouth region during speech interactions might negatively influence the classification from facial expressions. Therefore, the question remains how facial expressions can best contribute to multimodal detection in speech interactions.
Regarding the classification from the acoustic signal, the application of off-the-shelf classifiers without further adjustments seems to be challenging. We find a strong bias towards negative emotional states for our experimental setting. For instance, the personalization of the recognition algorithm (e. g., mean and standard deviation normalization) could help to adapt the classification for specific speakers and thus to reduce this bias. Further, the acoustic environment in the vehicle interior has special properties and the recognition software might need further adaptations.
Our transfer learning-based text classifier shows considerably better results. This is a substantial result in its own right, as, to the best of our knowledge, only one previous method for transfer learning in emotion recognition has been proposed, in which a sentiment/emotion-specific source for labels in pre-training has been used BIBREF29. Other applications of transfer learning from general language models include BIBREF41, BIBREF42. Our approach is substantially different, not being trained on a huge amount of noisy data, but on smaller out-of-domain sets of higher quality. This result suggests that emotion classification systems which work across domains can be developed with reasonable effort.
For a productive application of emotion detection in the context of speech events we conclude that a deployed system might perform best with a speech-to-text module followed by an analysis of the text. Further, in this work, we did not explore an ensemble model or the interaction of different modalities. Thus, future work should investigate the fusion of multiple modalities in a single classifier.
Acknowledgment
We thank Laura-Ana-Maria Bostan for discussions and data set preparations. This research has partially been funded by the German Research Council (DFG), project SEAT (KL 2869/1-1). | For emotion recognition from text, they use the described neural network as the baseline. For audio and face, there is no baseline. |
f92ee3c5fce819db540bded3cfcc191e21799cb1 | f92ee3c5fce819db540bded3cfcc191e21799cb1_0 | Q: What are the emotion detection tools used for audio and face input?
| We apply an off-the-shelf tool for emotion recognition (the manufacturer cannot be disclosed due to licensing restrictions) |
f92ee3c5fce819db540bded3cfcc191e21799cb1 | f92ee3c5fce819db540bded3cfcc191e21799cb1_1 | Q: What are the emotion detection tools used for audio and face input?
Text: Introduction
Automatic emotion recognition is commonly understood as the task of assigning an emotion to a predefined instance, for example an utterance (as audio signal), an image (for instance with a depicted face), or a textual unit (e.g., a transcribed utterance, a sentence, or a Tweet). The set of emotions is often following the original definition by Ekman Ekman1992, which includes anger, fear, disgust, sadness, joy, and surprise, or the extension by Plutchik Plutchik1980 who adds trust and anticipation.
Most work in emotion detection is limited to one modality. Exceptions include Busso2004 and Sebe2005, who investigate multimodal approaches combining speech with facial information. Emotion recognition in speech can utilize semantic features as well BIBREF0. Note that the term “multimodal” is also used beyond the combination of vision, audio, and text. For example, Soleymani2012 use it to refer to the combination of electroencephalogram, pupillary response and gaze distance.
In this paper, we deal with the specific situation of car environments as a testbed for multimodal emotion recognition. This is an interesting environment since it is, to some degree, a controlled environment: Dialogue partners are limited in movement, the degrees of freedom for occurring events are limited, and several sensors which are useful for emotion recognition are already integrated in this setting. More specifically, we focus on emotion recognition from speech events in a dialogue with a human partner and with an intelligent agent.
Also from the application point of view, the domain is a relevant choice: Past research has shown that emotional intelligence is beneficial for human computer interaction. Properly processing emotions in interactions increases the engagement of users and can improve performance when a specific task is to be fulfilled BIBREF1, BIBREF2, BIBREF3, BIBREF4. This is mostly based on the aspect that machines communicating with humans appear to be more trustworthy when they show empathy and are perceived as being natural BIBREF3, BIBREF5, BIBREF4.
Virtual agents play an increasingly important role in the automotive context and the speech modality is increasingly being used in cars due to its potential to limit distraction. It has been shown that adapting the in-car speech interaction system according to the drivers' emotional state can help to enhance security, performance as well as the overall driving experience BIBREF6, BIBREF7.
With this paper, we investigate how each of the three considered modalitites, namely facial expressions, utterances of a driver as an audio signal, and transcribed text contributes to the task of emotion recognition in in-car speech interactions. We focus on the five emotions of joy, insecurity, annoyance, relaxation, and boredom since terms corresponding to so-called fundamental emotions like fear have been shown to be associated to too strong emotional states than being appropriate for the in-car context BIBREF8. Our first contribution is the description of the experimental setup for our data collection. Aiming to provoke specific emotions with situations which can occur in real-world driving scenarios and to induce speech interactions, the study was conducted in a driving simulator. Based on the collected data, we provide baseline predictions with off-the-shelf tools for face and speech emotion recognition and compare them to a neural network-based approach for emotion recognition from text. Our second contribution is the introduction of transfer learning to adapt models trained on established out-of-domain corpora to our use case. We work on German language, therefore the transfer consists of a domain and a language transfer.
Related Work ::: Facial Expressions
A common approach to encode emotions for facial expressions is the facial action coding system FACS BIBREF9, BIBREF10, BIBREF11. As the reliability and reproducability of findings with this method have been critically discussed BIBREF12, the trend has increasingly shifted to perform the recognition directly on images and videos, especially with deep learning. For instance, jung2015joint developed a model which considers temporal geometry features and temporal appearance features from image sequences. kim2016hierarchical propose an ensemble of convolutional neural networks which outperforms isolated networks.
In the automotive domain, FACS is still popular. Ma2017 use support vector machines to distinguish happy, bothered, confused, and concentrated based on data from a natural driving environment. They found that bothered and confused are difficult to distinguish, while happy and concentrated are well identified. Aiming to reduce computational cost, Tews2011 apply a simple feature extraction using four dots in the face defining three facial areas. They analyze the variance of the three facial areas for the recognition of happy, anger and neutral. Ihme2018 aim at detecting frustration in a simulator environment. They induce the emotion with specific scenarios and a demanding secondary task and are able to associate specific face movements according to FACS. Paschero2012 use OpenCV (https://opencv.org/) to detect the eyes and the mouth region and track facial movements. They simulate different lightning conditions and apply a multilayer perceptron for the classification task of Ekman's set of fundamental emotions.
Overall, we found that studies using facial features usually focus on continuous driver monitoring, often in driver-only scenarios. In contrast, our work investigates the potential of emotion recognition during speech interactions.
Related Work ::: Acoustic
Past research on emotion recognition from acoustics mainly concentrates on either feature selection or the development of appropriate classifiers. rao2013emotion as well as ververidis2004automatic compare local and global features in support vector machines. Next to such discriminative approaches, hidden Markov models are well-studied, however, there is no agreement on which feature-based classifier is most suitable BIBREF13. Similar to the facial expression modality, recent efforts on applying deep learning have been increased for acoustic speech processing. For instance, lee2015high use a recurrent neural network and palaz2015analysis apply a convolutional neural network to the raw speech signal. Neumann2017 as well as Trigeorgis2016 analyze the importance of features in the context of deep learning-based emotion recognition.
In the automotive sector, Boril2011 approach the detection of negative emotional states within interactions between driver and co-driver as well as in calls of the driver towards the automated spoken dialogue system. Using real-world driving data, they find that the combination of acoustic features and their respective Gaussian mixture model scores performs best. Schuller2006 collects 2,000 dialog turns directed towards an automotive user interface and investigate the classification of anger, confusion, and neutral. They show that automatic feature generation and feature selection boost the performance of an SVM-based classifier. Further, they analyze the performance under systematically added noise and develop methods to mitigate negative effects. For more details, we refer the reader to the survey by Schuller2018. In this work, we explore the straight-forward application of domain independent software to an in-car scenario without domain-specific adaptations.
Related Work ::: Text
Previous work on emotion analysis in natural language processing focuses either on resource creation or on emotion classification for a specific task and domain. On the side of resource creation, the early and influential work of Pennebaker2015 is a dictionary of words associated with different psychologically relevant categories, including a subset of emotions. Another popular resource is the NRC dictionary by Mohammad2012b. It contains more than 10,000 words for a set of discrete emotion classes. Other resources include WordNet Affect BIBREF14, which distinguishes particular word classes. Further, annotated corpora have been created for a set of different domains, for instance fairy tales BIBREF15, blogs BIBREF16, Twitter BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, Facebook BIBREF22, news headlines BIBREF23, dialogues BIBREF24, literature BIBREF25, or self-reports on emotion events BIBREF26 (see BIBREF27 for an overview).
To automatically assign emotions to textual units, the application of dictionaries has been a popular approach and still is, particularly in domains without annotated corpora. Another approach to overcome the lack of huge amounts of annotated training data in a particular domain or for a specific topic is to exploit distant supervision: use the signal of occurrences of emoticons or specific hashtags or words to automatically label the data. This is sometimes referred to as self-labeling BIBREF21, BIBREF28, BIBREF29, BIBREF30.
A variety of classification approaches have been tested, including SNoW BIBREF15, support vector machines BIBREF16, maximum entropy classification, long short-term memory networks, and convolutional neural network models BIBREF18. More recently, the state of the art has been set by transfer learning from noisy annotations to more specific predictions BIBREF29. Still, it has been shown that transferring from one domain to another is challenging, as the way emotions are expressed varies between areas BIBREF27. The approach by Felbo2017 differs from our work in that they use a huge noisy data set for pretraining the model, while we use small, high-quality data sets instead.
Recently, the state of the art has also been pushed forward with a set of shared tasks, in which the participants with top results mostly exploit deep learning methods for prediction based on pretrained structures like embeddings or language models BIBREF21, BIBREF31, BIBREF20.
Our work follows this approach and builds on embeddings with deep learning. Furthermore, we approach the application and adaptation of text-based classifiers to the automotive domain with transfer learning.
Data set Collection
The first contribution of this paper is the construction of the AMMER data set which we describe in the following. We focus on the drivers' interactions with both a virtual agent as well as a co-driver. To collect the data in a safe and controlled environment and to be able to consider a variety of predefined driving situations, the study was conducted in a driving simulator.
Data set Collection ::: Study Setup and Design
The study environment consists of a fixed-base driving simulator running Vires's VTD (Virtual Test Drive, v2.2.0) simulation software (https://vires.com/vtd-vires-virtual-test-drive/). The vehicle has an automatic transmission, a steering wheel and gas and brake pedals. We collect data from video, speech and biosignals (Empatica E4 recording heart rate, electrodermal activity, and skin temperature; not further used in this paper) and questionnaires. Two RGB cameras are fixed in the vehicle to capture the driver's face, one at the sun shield above the driver's seat and one in the middle of the dashboard. A microphone is placed on the center console. One experimenter sits next to the driver, the other behind the simulator. The virtual agent accompanying the drive is realized as a Wizard-of-Oz prototype which enables the experimenter to manually trigger prerecorded voice samples playing through the in-car speakers and to bring new content to the center screen. Figure FIGREF4 shows the driving simulator.
The experimental setting is comparable to an everyday driving task. Participants are told that the goal of the study is to evaluate and to improve an intelligent driving assistant. To increase the probability of emotions to arise, participants are instructed to reach the destination of the route as fast as possible while following traffic rules and speed limits. They are informed that the time needed for the task would be compared to other participants. The route comprises highways, rural roads, and city streets. A navigation system with voice commands and information on the screen keeps the participants on the predefined track.
To trigger emotion changes in the participant, we use the following events: (i) a car on the right lane cutting off to the left lane when participants try to overtake followed by trucks blocking both lanes with a slow overtaking maneuver (ii) a skateboarder who appears unexpectedly on the street and (iii) participants are praised for reaching the destination unexpectedly quickly in comparison to previous participants.
Based on these events, we trigger three interactions (Table TABREF6 provides examples) with the intelligent agent (Driver-Agent Interactions, D–A). Pretending to be aware of the current situation, e. g., to recognize unusual driving behavior such as strong braking, the agent asks the driver to explain his subjective perception of these events in detail. Additionally, we trigger two more interactions with the intelligent agent at the beginning and at the end of the drive, where participants are asked to describe their mood and thoughts regarding the (upcoming) drive. This results in five interactions between the driver and the virtual agent.
Furthermore, the co-driver asks three different questions during sessions with light traffic and low cognitive demand (Driver-Co-Driver Interactions, D–Co). These questions are more general and non-traffic-related and aim at triggering the participants' memory and fantasy. Participants are asked to describe their last vacation, their dream house and their idea of the perfect job. In sum, there are eight interactions per participant (5 D–A, 3 D–Co).
Data set Collection ::: Procedure
At the beginning of the study, participants were welcomed and the upcoming study procedure was explained. Subsequently, participants signed a consent form and completed a questionnaire to provide demographic information. After that, the co-driving experimenter started with the instruction in the simulator, which was followed by a familiarization drive consisting of highway and city driving and covering different driving maneuvers such as tight corners, lane changing and strong braking. Subsequently, participants started with the main driving task. The drive had a duration of 20 minutes containing the eight previously mentioned speech interactions. After the completion of the drive, the actual goal of improving automatic emotion recognition was revealed and a standard emotional intelligence questionnaire, namely the TEIQue-SF BIBREF32, was handed to the participants. Finally, a retrospective interview was conducted, in which participants were played recordings of their in-car interactions and asked to give discrete (annoyance, insecurity, joy, relaxation, boredom, none, following BIBREF8) as well as dimensional (valence, arousal, dominance BIBREF33 on an 11-point scale) emotion ratings for the interactions and the corresponding situations. We only use the discrete class annotations in this paper.
Data set Collection ::: Data Analysis
Overall, 36 participants aged 18 to 64 years ($\mu $=28.89, $\sigma $=12.58) completed the experiment. This leads to 288 interactions, 180 between driver and the agent and 108 between driver and co-driver. The emotion self-ratings from the participants yielded 90 utterances labeled with joy, 26 with annoyance, 49 with insecurity, 9 with boredom, 111 with relaxation and 3 with no emotion. One example interaction per interaction type and emotion is shown in Table TABREF7. For further experiments, we only use joy, annoyance/anger, and insecurity/fear due to the small sample size for boredom and no emotion and under the assumption that relaxation brings little expressivity.
Methods ::: Emotion Recognition from Facial Expressions
We preprocess the visual data by extracting the sequence of images for each interaction from the point where the agent's or the co-driver's question was completely uttered until the driver's response stops. The average length is 16.3 seconds, with the minimum at 2.2s and the maximum at 54.7s. We apply an off-the-shelf tool for emotion recognition (the manufacturer cannot be disclosed due to licensing restrictions). It delivers frame-by-frame scores ($\in [0;100]$) for discrete emotional states of joy, anger and fear. While joy corresponds directly to our annotation, we map anger to our label annoyance and fear to our label insecurity. The emotion with the maximal average score across all frames constitutes the overall classification for the video sequence. Frames where the software is not able to detect the face are ignored.
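As a minimal sketch of this aggregation step, the following Python snippet averages hypothetical per-frame scores and maps the tool's labels to the study's annotation scheme; the frame format, field names, and score layout are our own assumptions, not the undisclosed tool's API.

```python
from statistics import mean

def aggregate_video(frames, emotions=("joy", "anger", "fear")):
    """frames: list of per-frame score dicts, e.g. {"joy": 12.0, "anger": 3.5, "fear": 0.0},
    or None for frames in which no face was detected. Scores are assumed to lie in [0, 100]."""
    valid = [f for f in frames if f is not None]   # ignore frames without a detected face
    if not valid:
        return None
    averages = {e: mean(f[e] for f in valid) for e in emotions}
    best = max(averages, key=averages.get)         # emotion with the maximal average score
    mapping = {"joy": "joy", "anger": "annoyance", "fear": "insecurity"}
    return mapping[best]
```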
Methods ::: Emotion Recognition from Audio Signal
We extract the audio signal for the same sequence as described for facial expressions and apply an off-the-shelf tool for emotion recognition. The software delivers single classification scores for a set of 24 discrete emotions for the entire utterance. We consider the outputs for the states of joy, anger, and fear, mapping analogously to our classes as for facial expressions. Low-confidence predictions are interpreted as “no emotion”. We accept the emotion with the highest score as the discrete prediction otherwise.
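The mapping just described can be sketched as follows; the confidence threshold and the structure of the tool's output are illustrative assumptions rather than the software's actual interface.

```python
def map_audio_scores(scores, threshold=0.5):
    """scores: dict mapping the tool's emotion labels to confidence values for one utterance."""
    mapping = {"joy": "joy", "anger": "annoyance", "fear": "insecurity"}
    considered = {label: scores.get(label, 0.0) for label in mapping}   # joy, anger, fear only
    best = max(considered, key=considered.get)
    if considered[best] < threshold:                                    # low-confidence prediction
        return "no emotion"
    return mapping[best]
```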
Methods ::: Emotion Recognition from Transcribed Utterances
For the emotion recognition from text, we manually transcribe all utterances of our AMMER study. To exploit existing data sets which are larger than the AMMER data set, we develop a transfer learning approach. We use a neural network with an embedding layer (frozen weights, pre-trained on Common Crawl and Wikipedia BIBREF36), a bidirectional LSTM BIBREF37, and two dense layers followed by a softmax output layer. This setup is inspired by BIBREF38. We use a dropout rate of 0.3 in all layers and optimize with Adam BIBREF39 with a learning rate of $10^{-5}$ (these parameters are the same for all further experiments). We build on top of the Keras library with the TensorFlow backend. We consider this setup our baseline model.
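A minimal Keras sketch of this baseline is shown below; layer sizes, the embedding matrix, and the sequence handling are placeholders rather than the exact configuration used in the study.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

def build_baseline(embedding_matrix, num_classes=3):
    vocab_size, emb_dim = embedding_matrix.shape
    model = models.Sequential([
        layers.Embedding(vocab_size, emb_dim,
                         embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
                         trainable=False),                       # frozen pre-trained embeddings
        layers.Bidirectional(layers.LSTM(128, dropout=0.3, recurrent_dropout=0.3), name="bilstm"),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(32, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),         # softmax output layer
    ])
    model.compile(optimizer=optimizers.Adam(learning_rate=1e-5),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# e.g. with a random placeholder embedding matrix:
# model = build_baseline(np.random.rand(20000, 300).astype("float32"))
```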
We train models on a variety of corpora, namely the common format published by BIBREF27 of the FigureEight (formerly known as Crowdflower) data set of social media, the ISEAR data BIBREF40 (self-reported emotional events), and the Twitter Emotion Corpus (TEC, weakly annotated Tweets with #anger, #disgust, #fear, #happy, #sadness, and #surprise, Mohammad2012). From all corpora, we use instances with labels fear, anger, or joy. These corpora are in English, whereas we make predictions on German utterances. Therefore, each corpus is translated to German with Google Translate. We remove URLs, user tags (“@Username”), punctuation and hash signs. The distributions of the data sets are shown in Table TABREF12.
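The cleaning step could look like the sketch below; the regular expressions are our own illustration of the described rules, and the translation step itself is not reproduced here.

```python
import re
import string

def clean(text: str) -> str:
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)                   # URLs
    text = re.sub(r"@\w+", " ", text)                                    # user tags ("@Username")
    text = text.replace("#", " ")                                        # hash signs
    text = text.translate(str.maketrans("", "", string.punctuation))     # punctuation
    return re.sub(r"\s+", " ", text).strip()
```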
To adapt models trained on these data, we apply transfer learning as follows: The model is first trained until convergence on one out-of-domain corpus (only on classes fear, joy, anger for compatibility reasons). Then, the parameters of the bi-LSTM layer are frozen and the remaining layers are further trained on AMMER. This procedure is illustrated in Figure FIGREF13.
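The transfer step can be sketched as follows, reusing the (hypothetical) layer name from the baseline sketch above; the number of epochs and the batch size are placeholders, not the study's settings.

```python
import tensorflow as tf

def transfer_to_ammer(pretrained_model, x_ammer, y_ammer, epochs=30, batch_size=32):
    pretrained_model.get_layer("bilstm").trainable = False    # freeze the bi-LSTM layer
    # re-compile so that the changed trainable flags take effect;
    # the embedding layer is already frozen, so only the dense layers keep training
    pretrained_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
                             loss="categorical_crossentropy", metrics=["accuracy"])
    pretrained_model.fit(x_ammer, y_ammer, epochs=epochs, batch_size=batch_size)
    return pretrained_model
```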
Results ::: Facial Expressions and Audio
Table TABREF16 shows the confusion matrices for facial and audio emotion recognition on our complete AMMER data set and Table TABREF17 shows the results per class for each method, including facial and audio data and micro and macro averages. The classification from facial expressions yields a macro-averaged $\text{F}_1$ score of 33 % across the three emotions joy, insecurity, and annoyance (P=31 %, R=35 %). While the classification results for joy are promising (R=43 %, P=57 %), the distinction of insecurity and annoyance from the other classes appears to be more challenging.
Regarding the audio signal, we observe a macro $\text{F}_1$ score of 29 % (P=42 %, R=22 %). There is a bias towards negative emotions, which results in a small number of detected joy predictions (R=4 %). Insecurity and annoyance are frequently confused.
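The per-class and averaged scores reported in this section can be computed as in the following sketch; the label lists are placeholders for the gold annotations and the predictions of one modality.

```python
from sklearn.metrics import precision_recall_fscore_support

labels = ["joy", "annoyance", "insecurity"]
y_true = ["joy", "annoyance", "insecurity", "joy"]        # placeholder gold labels
y_pred = ["joy", "insecurity", "insecurity", "annoyance"] # placeholder predictions

per_class = precision_recall_fscore_support(y_true, y_pred, labels=labels,
                                            average=None, zero_division=0)
macro = precision_recall_fscore_support(y_true, y_pred, labels=labels,
                                        average="macro", zero_division=0)
micro = precision_recall_fscore_support(y_true, y_pred, labels=labels,
                                        average="micro", zero_division=0)
```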
Results ::: Text from Transcribed Utterances
The experimental setting for the evaluation of emotion recognition from text is as follows: We evaluate the BiLSTM model in three different experiments: (1) in-domain, (2) out-of-domain and (3) transfer learning. For all experiments we train on the classes anger/annoyance, fear/insecurity and joy. Table TABREF19 shows all results for the comparison of these experimental settings.
Results ::: Text from Transcribed Utterances ::: Experiment 1: In-Domain application
We first set a baseline by validating our models on established corpora. We train the baseline model on 60 % of each data set listed in Table TABREF12 and evaluate that model with 40 % of the data from the same domain (results shown in the column “In-Domain” in Table TABREF19). Excluding AMMER, we achieve an average micro $\text{F}_1$ of 68 %, with best results of $\text{F}_1$=73 % on TEC. The model trained on our AMMER corpus achieves an $\text{F}_1$ score of 57 %. This is most probably due to the small size of this data set and the class bias towards joy, which makes up more than half of the data set. These results are mostly in line with Bostan2018.
Results ::: Text from Transcribed Utterances ::: Experiment 2: Simple Out-Of-Domain application
Now we analyze how well the models trained in Experiment 1 perform when applied to our data set. The results are shown in column “Simple” in Table TABREF19. We observe a clear drop in performance, with an average of F$_1$=48 %. The best-performing model is again the one trained on TEC, on par with the one trained on the Figure8 data. While the model trained on ISEAR performs second best in Experiment 1, it performs worst in Experiment 2.
Results ::: Text from Transcribed Utterances ::: Experiment 3: Transfer Learning application
To adapt models trained on previously existing data sets to our particular application, the AMMER corpus, we apply transfer learning. Here, we perform leave-one-out cross validation. As pre-trained models we use each model from Experiment 1 and further optimize with the training subset of each crossvalidation iteration of AMMER. The results are shown in the column “Transfer L.” in Table TABREF19. The confusion matrix is also depicted in Table TABREF16.
With this procedure we achieve an average performance of F$_1$=75 %, which is better than the results from the in-domain Experiment 1. The best performance of F$_1$=76 % is achieved with each of the pre-trained models except the one pre-trained on ISEAR. All transfer learning models clearly outperform their simple out-of-domain counterparts.
To ensure that this performance increase is not only due to the larger data set, we compare these results to training the model without transfer on the concatenation of each out-of-domain corpus with AMMER (again, in leave-one-out crossvalidation). These results are depicted in column “Joint C.”. Thus, both settings, “transfer learning” and “joint corpus”, have access to the same information.
The results show an increase in performance in contrast to not using AMMER for training. However, the transfer approach based on partially retraining the model shows a clear improvement for all models (by 7pp for Figure8, 10pp for EmoInt, 8pp for TEC, 13pp for ISEAR) compared to the “Joint” setup.
Summary & Future Work
We described the creation of the multimodal AMMER data with emotional speech interactions between a driver and both a virtual agent and a co-driver. We analyzed the modalities of facial expressions, acoustics, and transcribed utterances regarding their potential for emotion recognition during in-car speech interactions. We applied off-the-shelf emotion recognition tools for facial expressions and acoustics. For transcribed text, we developed a neural network-based classifier with transfer learning exploiting existing annotated corpora. We find that analyzing transcribed utterances is most promising for classification of the three emotional states of joy, annoyance and insecurity.
Our results for facial expressions indicate that there is potential for the classification of joy, however, the states of annoyance and insecurity are not well recognized. Future work needs to investigate more sophisticated approaches to map frame predictions to sequence predictions. Furthermore, movements of the mouth region during speech interactions might negatively influence the classification from facial expressions. Therefore, the question remains how facial expressions can best contribute to multimodal detection in speech interactions.
Regarding the classification from the acoustic signal, the application of off-the-shelf classifiers without further adjustments seems to be challenging. We find a strong bias towards negative emotional states for our experimental setting. For instance, the personalization of the recognition algorithm (e. g., mean and standard deviation normalization) could help to adapt the classification for specific speakers and thus to reduce this bias. Further, the acoustic environment in the vehicle interior has special properties and the recognition software might need further adaptations.
Our transfer learning-based text classifier shows considerably better results. This is a substantial result in its own right, as, to the best of our knowledge, only one previous method for transfer learning in emotion recognition has been proposed, in which a sentiment/emotion-specific source of labels is used for pre-training BIBREF29. Other applications of transfer learning from general language models include BIBREF41, BIBREF42. Our approach is substantially different, not being trained on a huge amount of noisy data, but on smaller out-of-domain sets of higher quality. This result suggests that emotion classification systems which work across domains can be developed with reasonable effort.
For a productive application of emotion detection in the context of speech events we conclude that a deployed system might perform best with a speech-to-text module followed by an analysis of the text. Further, in this work, we did not explore an ensemble model or the interaction of different modalities. Thus, future work should investigate the fusion of multiple modalities in a single classifier.
Acknowledgment
We thank Laura-Ana-Maria Bostan for discussions and data set preparations. This research has partially been funded by the German Research Council (DFG), project SEAT (KL 2869/1-1). | cannot be disclosed due to licensing restrictions |
4547818a3bbb727c4bb4a76554b5a5a7b5c5fedb | 4547818a3bbb727c4bb4a76554b5a5a7b5c5fedb_0 | Q: what amounts of size were used on german-english?
Text: Introduction
While neural machine translation (NMT) has achieved impressive performance in high-resource data conditions, becoming dominant in the field BIBREF0 , BIBREF1 , BIBREF2 , recent research has argued that these models are highly data-inefficient, and underperform phrase-based statistical machine translation (PBSMT) or unsupervised methods in low-data conditions BIBREF3 , BIBREF4 . In this paper, we re-assess the validity of these results, arguing that they are the result of lack of system adaptation to low-resource settings. Our main contributions are as follows:
Low-Resource Translation Quality Compared Across Systems
Figure FIGREF4 reproduces a plot by BIBREF3 which shows that their NMT system only outperforms their PBSMT system when more than 100 million words (approx. 5 million sentences) of parallel training data are available. Results shown by BIBREF4 are similar, showing that unsupervised NMT outperforms supervised systems if few parallel resources are available. In both papers, NMT systems are trained with hyperparameters that are typical for high-resource settings, and the authors did not tune hyperparameters, or change network architectures, to optimize NMT for low-resource conditions.
Improving Low-Resource Neural Machine Translation
The bulk of research on low-resource NMT has focused on exploiting monolingual data, or parallel data involving other language pairs. Methods to improve NMT with monolingual data range from the integration of a separately trained language model BIBREF5 to the training of parts of the NMT model with additional objectives, including a language modelling objective BIBREF5 , BIBREF6 , BIBREF7 , an autoencoding objective BIBREF8 , BIBREF9 , or a round-trip objective, where the model is trained to predict monolingual (target-side) training data that has been back-translated into the source language BIBREF6 , BIBREF10 , BIBREF11 . As an extreme case, models that rely exclusively on monolingual data have been shown to work BIBREF12 , BIBREF13 , BIBREF14 , BIBREF4 . Similarly, parallel data from other language pairs can be used to pre-train the network or jointly learn representations BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 .
While semi-supervised and unsupervised approaches have been shown to be very effective for some language pairs, their effectiveness depends on the availability of large amounts of suitable auxiliary data, and on other conditions being met. For example, the effectiveness of unsupervised methods is impaired when languages are morphologically different, or when training domains do not match BIBREF22.
More broadly, this line of research still accepts the premise that NMT models are data-inefficient and require large amounts of auxiliary data to train. In this work, we want to re-visit this point, and will focus on techniques to make more efficient use of small amounts of parallel training data. Low-resource NMT without auxiliary data has received less attention; work in this direction includes BIBREF23 , BIBREF24 .
Mainstream Improvements
We consider the hyperparameters used by BIBREF3 to be our baseline. This baseline does not make use of various advances in NMT architectures and training tricks. In contrast to the baseline, we use a BiDeep RNN architecture BIBREF25 , label smoothing BIBREF26 , dropout BIBREF27 , word dropout BIBREF28 , layer normalization BIBREF29 and tied embeddings BIBREF30 .
Language Representation
Subword representations such as BPE BIBREF31 have become a popular choice to achieve open-vocabulary translation. BPE has one hyperparameter, the number of merge operations, which determines the size of the final vocabulary. For high-resource settings, the effect of vocabulary size on translation quality is relatively small; BIBREF32 report mixed results when comparing vocabularies of 30k and 90k subwords.
In low-resource settings, large vocabularies result in low-frequency (sub)words being represented as atomic units at training time, and the ability to learn good high-dimensional representations of these is doubtful. BIBREF33 propose a minimum frequency threshold for subword units, and splitting any less frequent subword into smaller units or characters. We expect that such a threshold reduces the need to carefully tune the vocabulary size to the dataset, leading to more aggressive segmentation on smaller datasets.
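A deliberately simplified sketch of such a threshold is shown below: after segmentation, any subword whose training-corpus frequency is below the threshold is split further, here all the way into characters. Real implementations instead revert individual merge operations (the subword-nmt toolkit exposes this as a vocabulary threshold option); this is only an illustration of the idea.

```python
from collections import Counter

def enforce_threshold(segmented_sentences, threshold=10):
    """segmented_sentences: lists of BPE subword tokens using the '@@' continuation marker."""
    freq = Counter(tok for sent in segmented_sentences for tok in sent)
    result = []
    for sent in segmented_sentences:
        new_sent = []
        for tok in sent:
            if freq[tok] >= threshold:
                new_sent.append(tok)
                continue
            # split a rare subword into single characters, keeping word boundaries intact
            core, continued = (tok[:-2], True) if tok.endswith("@@") else (tok, False)
            chars = [c + "@@" for c in core]
            if chars and not continued:
                chars[-1] = chars[-1][:-2]
            new_sent.extend(chars)
        result.append(new_sent)
    return result
```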
Hyperparameter Tuning
Due to long training times, hyperparameters are hard to optimize by grid search, and are often re-used across experiments. However, best practices differ between high-resource and low-resource settings. While the trend in high-resource settings is towards using larger and deeper models, BIBREF24 use smaller and fewer layers for smaller datasets. Previous work has argued for larger batch sizes in NMT BIBREF35 , BIBREF36 , but we find that using smaller batches is beneficial in low-resource settings. More aggressive dropout, including dropping whole words at random BIBREF37 , is also likely to be more important. We report results on a narrow hyperparameter search guided by previous work and our own intuition.
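Word dropout, i.e. dropping whole words at random, can be sketched as zeroing entire embedding vectors during training; the probability and the NumPy formulation are illustrative, while the actual systems implement this inside Nematus.

```python
import numpy as np

def word_dropout(embeddings, p=0.2):
    """embeddings: array of shape (seq_len, emb_dim); zeroes whole word vectors with probability p."""
    mask = (np.random.rand(embeddings.shape[0]) >= p).astype(embeddings.dtype)
    return embeddings * mask[:, None]
```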
Lexical Model
Finally, we implement and test the lexical model by BIBREF24 , which has been shown to be beneficial in low-data conditions. The core idea is to train a simple feed-forward network, the lexical model, jointly with the original attentional NMT model. The input of the lexical model at time step $t$ is the weighted average of source embeddings $f_t$ (the attention weights $\alpha_t$ are shared with the main model). After a feedforward layer (with skip connection), the lexical model's output $h^{\ell}_t$ is combined with the original model's hidden state $h_t$ before softmax computation.
Our implementation adds dropout and layer normalization to the lexical model.
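The following NumPy sketch reflects our reading of this description (weighted average of source embeddings, feed-forward layer with skip connection, combination with the decoder state before the softmax); the weight shapes and the exact way of combining the two contributions are assumptions, not the reference implementation.

```python
import numpy as np

def lexical_logits(alpha, src_emb, h_t, W_l, b_l, W_dec, b_dec, W_lex, b_lex):
    """alpha: (src_len,) attention weights at the current time step (shared with the main model)
    src_emb: (src_len, emb_dim) source word embeddings
    h_t: (hid_dim,) decoder hidden state of the main model
    W_l must be (emb_dim, emb_dim) so that the skip connection type-checks."""
    f_t = alpha @ src_emb                          # weighted average of source embeddings
    h_lex = np.tanh(f_t @ W_l + b_l) + f_t         # feed-forward layer with skip connection
    logits = h_t @ W_dec + b_dec                   # main model's contribution to the output layer
    logits = logits + h_lex @ W_lex + b_lex        # lexical model's contribution
    return logits                                  # softmax over this sum gives the word distribution
```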
Data and Preprocessing
We use the TED data from the IWSLT 2014 German→English shared translation task BIBREF38 . We use the same data cleanup and train/dev split as BIBREF39 , resulting in 159000 parallel sentences of training data, and 7584 for development.
As a second language pair, we evaluate our systems on a Korean–English dataset with around 90000 parallel sentences of training data, 1000 for development, and 2000 for testing.
For both PBSMT and NMT, we apply the same tokenization and truecasing using Moses scripts. For NMT, we also learn BPE subword segmentation with 30000 merge operations, shared between German and English, and independently for Korean→English.
To simulate different amounts of training resources, we randomly subsample the IWSLT training corpus 5 times, discarding half of the data at each step. Truecaser and BPE segmentation are learned on the full training corpus; as one of our experiments, we set the frequency threshold for subword units to 10 in each subcorpus (see SECREF7 ). Table TABREF14 shows statistics for each subcorpus, including the subword vocabulary.
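The subsampling scheme can be reproduced with a few lines; the seed is arbitrary.

```python
import random

def halve_corpus(src_lines, tgt_lines, steps=5, seed=1):
    """Randomly halve a parallel corpus `steps` times, keeping source and target aligned."""
    rng = random.Random(seed)
    pairs = list(zip(src_lines, tgt_lines))
    subcorpora = [pairs]
    for _ in range(steps):
        pairs = rng.sample(pairs, len(pairs) // 2)
        subcorpora.append(pairs)
    return subcorpora    # from the full corpus down to 1/32 of the sentence pairs
```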
Translation outputs are detruecased, detokenized, and compared against the reference with cased BLEU using sacreBLEU BIBREF40 , BIBREF41 . Like BIBREF39 , we report BLEU on the concatenated dev sets for IWSLT 2014 (tst2010, tst2011, tst2012, dev2010, dev2012).
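Scoring with sacreBLEU's Python API could look as follows; the file names are placeholders for the detruecased, detokenized system output and the reference.

```python
import sacrebleu

with open("hyp.detok.txt", encoding="utf-8") as h, open("ref.txt", encoding="utf-8") as r:
    hyps = [line.strip() for line in h]
    refs = [line.strip() for line in r]

bleu = sacrebleu.corpus_bleu(hyps, [refs])   # cased BLEU by default
print(bleu.score)
```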
PBSMT Baseline
We use Moses BIBREF42 to train a PBSMT system. We use MGIZA BIBREF43 to train word alignments, and lmplz BIBREF44 for a 5-gram LM. Feature weights are optimized on the dev set to maximize BLEU with batch MIRA BIBREF45 – we perform multiple runs where indicated. Unlike BIBREF3 , we do not use extra data for the LM. Both PBSMT and NMT can benefit from monolingual data, so the availability of monolingual data is no longer an exclusive advantage of PBSMT (see SECREF5 ).
NMT Systems
We train neural systems with Nematus BIBREF46 . Our baseline mostly follows the settings in BIBREF3 ; we use adam BIBREF47 and perform early stopping based on dev set BLEU. We express our batch size in number of tokens, and set it to 4000 in the baseline (comparable to a batch size of 80 sentences used in previous work).
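Token-based batching, as opposed to a fixed number of sentences per batch, can be sketched as follows.

```python
def token_batches(sentences, max_tokens=4000):
    """sentences: list of token lists, ideally sorted by length beforehand."""
    batch, n_tokens = [], 0
    for sent in sentences:
        if batch and n_tokens + len(sent) > max_tokens:
            yield batch
            batch, n_tokens = [], 0
        batch.append(sent)
        n_tokens += len(sent)
    if batch:
        yield batch
```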
We subsequently add the methods described in section SECREF3 , namely the bideep RNN, label smoothing, dropout, tied embeddings, layer normalization, changes to the BPE vocabulary size, batch size, model depth, regularization parameters and learning rate. Detailed hyperparameters are reported in Appendix SECREF7 .
Results
Table TABREF18 shows the effect of adding different methods to the baseline NMT system, on the ultra-low data condition (100k words of training data) and the full IWSLT 14 training corpus (3.2M words). Our "mainstream improvements" add around 6–7 BLEU in both data conditions.
In the ultra-low data condition, reducing the BPE vocabulary size is very effective (+4.9 BLEU). Reducing the batch size to 1000 tokens results in a BLEU gain of 0.3, and the lexical model yields an additional +0.6 BLEU. However, aggressive (word) dropout (+3.4 BLEU) and tuning other hyperparameters (+0.7 BLEU) have a stronger effect than the lexical model, and adding the lexical model (9) on top of the optimized configuration (8) does not improve performance. Together, the adaptations to the ultra-low data setting yield 9.4 BLEU (7.2→16.6). The model trained on full IWSLT data is less sensitive to our changes (31.9→32.8 BLEU), and optimal hyperparameters differ depending on the data condition. Subsequently, we still apply the hyperparameters that were optimized for the ultra-low data condition (8) to the other data conditions, and to Korean→English, for simplicity.
For a comparison with PBSMT, and across different data settings, consider Figure FIGREF19 , which shows the result of PBSMT, our NMT baseline, and our optimized NMT system. Our NMT baseline still performs worse than the PBSMT system for 3.2M words of training data, which is consistent with the results by BIBREF3 . However, our optimized NMT system shows strong improvements, and outperforms the PBSMT system across all data settings. Some sample translations are shown in Appendix SECREF8 .
For comparison to previous work, we report lowercased and tokenized results on the full IWSLT 14 training set in Table TABREF20 . Our results far outperform the RNN-based results reported by BIBREF48 , and are on par with the best reported results on this dataset.
Table TABREF21 shows results for Korean→English, using the same configurations (1, 2 and 8) as for German–English. Our results confirm that the techniques we apply are successful across datasets, and result in stronger systems than previously reported on this dataset, achieving 10.37 BLEU as compared to 5.97 BLEU reported by gu-EtAl:2018:EMNLP1.
Conclusions
Our results demonstrate that NMT is in fact a suitable choice in low-data settings, and can outperform PBSMT with far less parallel training data than previously claimed. Recently, the main trend in low-resource MT research has been the better exploitation of monolingual and multilingual resources. Our results show that low-resource NMT is very sensitive to hyperparameters such as BPE vocabulary size, word dropout, and others, and by following a set of best practices, we can train competitive NMT systems without relying on auxiliary resources. This has practical relevance for languages where large amounts of monolingual data, or multilingual data involving related languages, are not available. Even though we focused on only using parallel data, our results are also relevant for work on using auxiliary data to improve low-resource MT. Supervised systems serve as an important baseline to judge the effectiveness of semisupervised or unsupervised approaches, and the quality of supervised systems trained on little data can directly impact semi-supervised workflows, for instance for the back-translation of monolingual data.
Acknowledgments
Rico Sennrich has received funding from the Swiss National Science Foundation in the project CoNTra (grant number 105212_169888). Biao Zhang acknowledges the support of the Baidu Scholarship.
Hyperparameters
Table TABREF23 lists hyperparameters used for the different experiments in the ablation study (Table 2). Hyperparameters were kept constant across different data settings, except for the validation interval and subword vocabulary size (see Table 1).
Sample Translations
Table TABREF24 shows some sample translations that represent typical errors of our PBSMT and NMT systems, trained with ultra-low (100k words) and low (3.2M words) amounts of data. For unknown words such as blutbefleckten (`bloodstained') or Spaniern (`Spaniards', `Spanish'), PBSMT systems default to copying, while NMT systems produce translations on a subword-level, with varying success (blue-flect, bleed; spaniers, Spanians). NMT systems learn some syntactic disambiguation even with very little data, for example the translation of das and die as relative pronouns ('that', 'which', 'who'), while PBSMT produces less grammatical translation. On the flip side, the ultra low-resource NMT system ignores some unknown words in favour of a more-or-less fluent, but semantically inadequate translation: erobert ('conquered') is translated into doing, and richtig aufgezeichnet ('registered correctly', `recorded correctly') into really the first thing. | Training data with 159000, 80000, 40000, 20000, 10000 and 5000 sentences, and 7584 sentences for development |
4547818a3bbb727c4bb4a76554b5a5a7b5c5fedb | 4547818a3bbb727c4bb4a76554b5a5a7b5c5fedb_1 | Q: what amounts of size were used on german-english?
Text: Introduction
While neural machine translation (NMT) has achieved impressive performance in high-resource data conditions, becoming dominant in the field BIBREF0 , BIBREF1 , BIBREF2 , recent research has argued that these models are highly data-inefficient, and underperform phrase-based statistical machine translation (PBSMT) or unsupervised methods in low-data conditions BIBREF3 , BIBREF4 . In this paper, we re-assess the validity of these results, arguing that they are the result of lack of system adaptation to low-resource settings. Our main contributions are as follows:
Low-Resource Translation Quality Compared Across Systems
Figure FIGREF4 reproduces a plot by BIBREF3 which shows that their NMT system only outperforms their PBSMT system when more than 100 million words (approx. 5 million sentences) of parallel training data are available. Results shown by BIBREF4 are similar, showing that unsupervised NMT outperforms supervised systems if few parallel resources are available. In both papers, NMT systems are trained with hyperparameters that are typical for high-resource settings, and the authors did not tune hyperparameters, or change network architectures, to optimize NMT for low-resource conditions.
Improving Low-Resource Neural Machine Translation
The bulk of research on low-resource NMT has focused on exploiting monolingual data, or parallel data involving other language pairs. Methods to improve NMT with monolingual data range from the integration of a separately trained language model BIBREF5 to the training of parts of the NMT model with additional objectives, including a language modelling objective BIBREF5 , BIBREF6 , BIBREF7 , an autoencoding objective BIBREF8 , BIBREF9 , or a round-trip objective, where the model is trained to predict monolingual (target-side) training data that has been back-translated into the source language BIBREF6 , BIBREF10 , BIBREF11 . As an extreme case, models that rely exclusively on monolingual data have been shown to work BIBREF12 , BIBREF13 , BIBREF14 , BIBREF4 . Similarly, parallel data from other language pairs can be used to pre-train the network or jointly learn representations BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 .
While semi-supervised and unsupervised approaches have been shown to be very effective for some language pairs, their effectiveness depends on the availability of large amounts of suitable auxiliary data, and on other conditions being met. For example, the effectiveness of unsupervised methods is impaired when languages are morphologically different, or when training domains do not match BIBREF22.
More broadly, this line of research still accepts the premise that NMT models are data-inefficient and require large amounts of auxiliary data to train. In this work, we want to re-visit this point, and will focus on techniques to make more efficient use of small amounts of parallel training data. Low-resource NMT without auxiliary data has received less attention; work in this direction includes BIBREF23 , BIBREF24 .
Mainstream Improvements
We consider the hyperparameters used by BIBREF3 to be our baseline. This baseline does not make use of various advances in NMT architectures and training tricks. In contrast to the baseline, we use a BiDeep RNN architecture BIBREF25 , label smoothing BIBREF26 , dropout BIBREF27 , word dropout BIBREF28 , layer normalization BIBREF29 and tied embeddings BIBREF30 .
Language Representation
Subword representations such as BPE BIBREF31 have become a popular choice to achieve open-vocabulary translation. BPE has one hyperparameter, the number of merge operations, which determines the size of the final vocabulary. For high-resource settings, the effect of vocabulary size on translation quality is relatively small; BIBREF32 report mixed results when comparing vocabularies of 30k and 90k subwords.
In low-resource settings, large vocabularies result in low-frequency (sub)words being represented as atomic units at training time, and the ability to learn good high-dimensional representations of these is doubtful. BIBREF33 propose a minimum frequency threshold for subword units, and splitting any less frequent subword into smaller units or characters. We expect that such a threshold reduces the need to carefully tune the vocabulary size to the dataset, leading to more aggressive segmentation on smaller datasets.
Hyperparameter Tuning
Due to long training times, hyperparameters are hard to optimize by grid search, and are often re-used across experiments. However, best practices differ between high-resource and low-resource settings. While the trend in high-resource settings is towards using larger and deeper models, BIBREF24 use smaller and fewer layers for smaller datasets. Previous work has argued for larger batch sizes in NMT BIBREF35 , BIBREF36 , but we find that using smaller batches is beneficial in low-resource settings. More aggressive dropout, including dropping whole words at random BIBREF37 , is also likely to be more important. We report results on a narrow hyperparameter search guided by previous work and our own intuition.
Lexical Model
Finally, we implement and test the lexical model by BIBREF24 , which has been shown to be beneficial in low-data conditions. The core idea is to train a simple feed-forward network, the lexical model, jointly with the original attentional NMT model. The input of the lexical model at time step $t$ is the weighted average of source embeddings $f_t$ (the attention weights $\alpha_t$ are shared with the main model). After a feedforward layer (with skip connection), the lexical model's output $h^{\ell}_t$ is combined with the original model's hidden state $h_t$ before softmax computation.
Our implementation adds dropout and layer normalization to the lexical model.
Data and Preprocessing
We use the TED data from the IWSLT 2014 German→English shared translation task BIBREF38 . We use the same data cleanup and train/dev split as BIBREF39 , resulting in 159000 parallel sentences of training data, and 7584 for development.
As a second language pair, we evaluate our systems on a Korean–English dataset with around 90000 parallel sentences of training data, 1000 for development, and 2000 for testing.
For both PBSMT and NMT, we apply the same tokenization and truecasing using Moses scripts. For NMT, we also learn BPE subword segmentation with 30000 merge operations, shared between German and English, and independently for Korean→English.
To simulate different amounts of training resources, we randomly subsample the IWSLT training corpus 5 times, discarding half of the data at each step. Truecaser and BPE segmentation are learned on the full training corpus; as one of our experiments, we set the frequency threshold for subword units to 10 in each subcorpus (see SECREF7 ). Table TABREF14 shows statistics for each subcorpus, including the subword vocabulary.
Translation outputs are detruecased, detokenized, and compared against the reference with cased BLEU using sacreBLEU BIBREF40 , BIBREF41 . Like BIBREF39 , we report BLEU on the concatenated dev sets for IWSLT 2014 (tst2010, tst2011, tst2012, dev2010, dev2012).
PBSMT Baseline
We use Moses BIBREF42 to train a PBSMT system. We use MGIZA BIBREF43 to train word alignments, and lmplz BIBREF44 for a 5-gram LM. Feature weights are optimized on the dev set to maximize BLEU with batch MIRA BIBREF45 – we perform multiple runs where indicated. Unlike BIBREF3 , we do not use extra data for the LM. Both PBSMT and NMT can benefit from monolingual data, so the availability of monolingual data is no longer an exclusive advantage of PBSMT (see SECREF5 ).
NMT Systems
We train neural systems with Nematus BIBREF46 . Our baseline mostly follows the settings in BIBREF3 ; we use adam BIBREF47 and perform early stopping based on dev set BLEU. We express our batch size in number of tokens, and set it to 4000 in the baseline (comparable to a batch size of 80 sentences used in previous work).
We subsequently add the methods described in section SECREF3 , namely the bideep RNN, label smoothing, dropout, tied embeddings, layer normalization, changes to the BPE vocabulary size, batch size, model depth, regularization parameters and learning rate. Detailed hyperparameters are reported in Appendix SECREF7 .
Results
Table TABREF18 shows the effect of adding different methods to the baseline NMT system, on the ultra-low data condition (100k words of training data) and the full IWSLT 14 training corpus (3.2M words). Our "mainstream improvements" add around 6–7 BLEU in both data conditions.
In the ultra-low data condition, reducing the BPE vocabulary size is very effective (+4.9 BLEU). Reducing the batch size to 1000 tokens results in a BLEU gain of 0.3, and the lexical model yields an additional +0.6 BLEU. However, aggressive (word) dropout (+3.4 BLEU) and tuning other hyperparameters (+0.7 BLEU) have a stronger effect than the lexical model, and adding the lexical model (9) on top of the optimized configuration (8) does not improve performance. Together, the adaptations to the ultra-low data setting yield 9.4 BLEU (7.2→16.6). The model trained on full IWSLT data is less sensitive to our changes (31.9→32.8 BLEU), and optimal hyperparameters differ depending on the data condition. Subsequently, we still apply the hyperparameters that were optimized for the ultra-low data condition (8) to the other data conditions, and to Korean→English, for simplicity.
For a comparison with PBSMT, and across different data settings, consider Figure FIGREF19 , which shows the result of PBSMT, our NMT baseline, and our optimized NMT system. Our NMT baseline still performs worse than the PBSMT system for 3.2M words of training data, which is consistent with the results by BIBREF3 . However, our optimized NMT system shows strong improvements, and outperforms the PBSMT system across all data settings. Some sample translations are shown in Appendix SECREF8 .
For comparison to previous work, we report lowercased and tokenized results on the full IWSLT 14 training set in Table TABREF20 . Our results far outperform the RNN-based results reported by BIBREF48 , and are on par with the best reported results on this dataset.
Table TABREF21 shows results for Korean→English, using the same configurations (1, 2 and 8) as for German–English. Our results confirm that the techniques we apply are successful across datasets, and result in stronger systems than previously reported on this dataset, achieving 10.37 BLEU as compared to 5.97 BLEU reported by gu-EtAl:2018:EMNLP1.
Conclusions
Our results demonstrate that NMT is in fact a suitable choice in low-data settings, and can outperform PBSMT with far less parallel training data than previously claimed. Recently, the main trend in low-resource MT research has been the better exploitation of monolingual and multilingual resources. Our results show that low-resource NMT is very sensitive to hyperparameters such as BPE vocabulary size, word dropout, and others, and by following a set of best practices, we can train competitive NMT systems without relying on auxiliary resources. This has practical relevance for languages where large amounts of monolingual data, or multilingual data involving related languages, are not available. Even though we focused on only using parallel data, our results are also relevant for work on using auxiliary data to improve low-resource MT. Supervised systems serve as an important baseline to judge the effectiveness of semisupervised or unsupervised approaches, and the quality of supervised systems trained on little data can directly impact semi-supervised workflows, for instance for the back-translation of monolingual data.
Acknowledgments
Rico Sennrich has received funding from the Swiss National Science Foundation in the project CoNTra (grant number 105212_169888). Biao Zhang acknowledges the support of the Baidu Scholarship.
Hyperparameters
Table TABREF23 lists hyperparameters used for the different experiments in the ablation study (Table 2). Hyperparameters were kept constant across different data settings, except for the validation interval and subword vocabulary size (see Table 1).
Sample Translations
Table TABREF24 shows some sample translations that represent typical errors of our PBSMT and NMT systems, trained with ultra-low (100k words) and low (3.2M words) amounts of data. For unknown words such as blutbefleckten (`bloodstained') or Spaniern (`Spaniards', `Spanish'), PBSMT systems default to copying, while NMT systems produce translations on a subword-level, with varying success (blue-flect, bleed; spaniers, Spanians). NMT systems learn some syntactic disambiguation even with very little data, for example the translation of das and die as relative pronouns ('that', 'which', 'who'), while PBSMT produces less grammatical translation. On the flip side, the ultra low-resource NMT system ignores some unknown words in favour of a more-or-less fluent, but semantically inadequate translation: erobert ('conquered') is translated into doing, and richtig aufgezeichnet ('registered correctly', `recorded correctly') into really the first thing. | ultra-low data condition (100k words of training data) and the full IWSLT 14 training corpus (3.2M words) |
07d7652ad4a0ec92e6b44847a17c378b0d9f57f5 | 07d7652ad4a0ec92e6b44847a17c378b0d9f57f5_0 | Q: what were their experimental results in the low-resource dataset?
Text: Introduction
While neural machine translation (NMT) has achieved impressive performance in high-resource data conditions, becoming dominant in the field BIBREF0 , BIBREF1 , BIBREF2 , recent research has argued that these models are highly data-inefficient, and underperform phrase-based statistical machine translation (PBSMT) or unsupervised methods in low-data conditions BIBREF3 , BIBREF4 . In this paper, we re-assess the validity of these results, arguing that they are the result of lack of system adaptation to low-resource settings. Our main contributions are as follows:
Low-Resource Translation Quality Compared Across Systems
Figure FIGREF4 reproduces a plot by BIBREF3 which shows that their NMT system only outperforms their PBSMT system when more than 100 million words (approx. 5 million sentences) of parallel training data are available. Results shown by BIBREF4 are similar, showing that unsupervised NMT outperforms supervised systems if few parallel resources are available. In both papers, NMT systems are trained with hyperparameters that are typical for high-resource settings, and the authors did not tune hyperparameters, or change network architectures, to optimize NMT for low-resource conditions.
Improving Low-Resource Neural Machine Translation
The bulk of research on low-resource NMT has focused on exploiting monolingual data, or parallel data involving other language pairs. Methods to improve NMT with monolingual data range from the integration of a separately trained language model BIBREF5 to the training of parts of the NMT model with additional objectives, including a language modelling objective BIBREF5 , BIBREF6 , BIBREF7 , an autoencoding objective BIBREF8 , BIBREF9 , or a round-trip objective, where the model is trained to predict monolingual (target-side) training data that has been back-translated into the source language BIBREF6 , BIBREF10 , BIBREF11 . As an extreme case, models that rely exclusively on monolingual data have been shown to work BIBREF12 , BIBREF13 , BIBREF14 , BIBREF4 . Similarly, parallel data from other language pairs can be used to pre-train the network or jointly learn representations BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 .
While semi-supervised and unsupervised approaches have been shown to be very effective for some language pairs, their effectiveness depends on the availability of large amounts of suitable auxiliary data, and on other conditions being met. For example, the effectiveness of unsupervised methods is impaired when languages are morphologically different, or when training domains do not match BIBREF22.
More broadly, this line of research still accepts the premise that NMT models are data-inefficient and require large amounts of auxiliary data to train. In this work, we want to re-visit this point, and will focus on techniques to make more efficient use of small amounts of parallel training data. Low-resource NMT without auxiliary data has received less attention; work in this direction includes BIBREF23 , BIBREF24 .
Mainstream Improvements
We consider the hyperparameters used by BIBREF3 to be our baseline. This baseline does not make use of various advances in NMT architectures and training tricks. In contrast to the baseline, we use a BiDeep RNN architecture BIBREF25 , label smoothing BIBREF26 , dropout BIBREF27 , word dropout BIBREF28 , layer normalization BIBREF29 and tied embeddings BIBREF30 .
Language Representation
Subword representations such as BPE BIBREF31 have become a popular choice to achieve open-vocabulary translation. BPE has one hyperparameter, the number of merge operations, which determines the size of the final vocabulary. For high-resource settings, the effect of vocabulary size on translation quality is relatively small; BIBREF32 report mixed results when comparing vocabularies of 30k and 90k subwords.
In low-resource settings, large vocabularies result in low-frequency (sub)words being represented as atomic units at training time, and the ability to learn good high-dimensional representations of these is doubtful. BIBREF33 propose a minimum frequency threshold for subword units, and splitting any less frequent subword into smaller units or characters. We expect that such a threshold reduces the need to carefully tune the vocabulary size to the dataset, leading to more aggressive segmentation on smaller datasets.
Hyperparameter Tuning
Due to long training times, hyperparameters are hard to optimize by grid search, and are often re-used across experiments. However, best practices differ between high-resource and low-resource settings. While the trend in high-resource settings is towards using larger and deeper models, BIBREF24 use smaller and fewer layers for smaller datasets. Previous work has argued for larger batch sizes in NMT BIBREF35 , BIBREF36 , but we find that using smaller batches is beneficial in low-resource settings. More aggressive dropout, including dropping whole words at random BIBREF37 , is also likely to be more important. We report results on a narrow hyperparameter search guided by previous work and our own intuition.
Lexical Model
Finally, we implement and test the lexical model by BIBREF24 , which has been shown to be beneficial in low-data conditions. The core idea is to train a simple feed-forward network, the lexical model, jointly with the original attentional NMT model. The input of the lexical model at time step $t$ is the weighted average of source embeddings $f_t$ (the attention weights $\alpha_t$ are shared with the main model). After a feedforward layer (with skip connection), the lexical model's output $h^{\ell}_t$ is combined with the original model's hidden state $h_t$ before softmax computation.
Our implementation adds dropout and layer normalization to the lexical model.
Data and Preprocessing
We use the TED data from the IWSLT 2014 German→English shared translation task BIBREF38 . We use the same data cleanup and train/dev split as BIBREF39 , resulting in 159000 parallel sentences of training data, and 7584 for development.
As a second language pair, we evaluate our systems on a Korean–English dataset with around 90000 parallel sentences of training data, 1000 for development, and 2000 for testing.
For both PBSMT and NMT, we apply the same tokenization and truecasing using Moses scripts. For NMT, we also learn BPE subword segmentation with 30000 merge operations, shared between German and English, and independently for Korean→English.
To simulate different amounts of training resources, we randomly subsample the IWSLT training corpus 5 times, discarding half of the data at each step. Truecaser and BPE segmentation are learned on the full training corpus; as one of our experiments, we set the frequency threshold for subword units to 10 in each subcorpus (see SECREF7 ). Table TABREF14 shows statistics for each subcorpus, including the subword vocabulary.
Translation outputs are detruecased, detokenized, and compared against the reference with cased BLEU using sacreBLEU BIBREF40 , BIBREF41 . Like BIBREF39 , we report BLEU on the concatenated dev sets for IWSLT 2014 (tst2010, tst2011, tst2012, dev2010, dev2012).
PBSMT Baseline
We use Moses BIBREF42 to train a PBSMT system. We use MGIZA BIBREF43 to train word alignments, and lmplz BIBREF44 for a 5-gram LM. Feature weights are optimized on the dev set to maximize BLEU with batch MIRA BIBREF45 – we perform multiple runs where indicated. Unlike BIBREF3 , we do not use extra data for the LM. Both PBSMT and NMT can benefit from monolingual data, so the availability of monolingual data is no longer an exclusive advantage of PBSMT (see SECREF5 ).
NMT Systems
We train neural systems with Nematus BIBREF46 . Our baseline mostly follows the settings in BIBREF3 ; we use adam BIBREF47 and perform early stopping based on dev set BLEU. We express our batch size in number of tokens, and set it to 4000 in the baseline (comparable to a batch size of 80 sentences used in previous work).
We subsequently add the methods described in section SECREF3 , namely the bideep RNN, label smoothing, dropout, tied embeddings, layer normalization, changes to the BPE vocabulary size, batch size, model depth, regularization parameters and learning rate. Detailed hyperparameters are reported in Appendix SECREF7 .
Results
Table TABREF18 shows the effect of adding different methods to the baseline NMT system, on the ultra-low data condition (100k words of training data) and the full IWSLT 14 training corpus (3.2M words). Our "mainstream improvements" add around 6–7 BLEU in both data conditions.
In the ultra-low data condition, reducing the BPE vocabulary size is very effective (+4.9 BLEU). Reducing the batch size to 1000 tokens results in a BLEU gain of 0.3, and the lexical model yields an additional +0.6 BLEU. However, aggressive (word) dropout (+3.4 BLEU) and tuning other hyperparameters (+0.7 BLEU) have a stronger effect than the lexical model, and adding the lexical model (9) on top of the optimized configuration (8) does not improve performance. Together, the adaptations to the ultra-low data setting yield 9.4 BLEU (7.2→16.6). The model trained on full IWSLT data is less sensitive to our changes (31.9→32.8 BLEU), and optimal hyperparameters differ depending on the data condition. Subsequently, we still apply the hyperparameters that were optimized for the ultra-low data condition (8) to the other data conditions, and to Korean→English, for simplicity.
For a comparison with PBSMT, and across different data settings, consider Figure FIGREF19 , which shows the result of PBSMT, our NMT baseline, and our optimized NMT system. Our NMT baseline still performs worse than the PBSMT system for 3.2M words of training data, which is consistent with the results by BIBREF3 . However, our optimized NMT system shows strong improvements, and outperforms the PBSMT system across all data settings. Some sample translations are shown in Appendix SECREF8 .
For comparison to previous work, we report lowercased and tokenized results on the full IWSLT 14 training set in Table TABREF20 . Our results far outperform the RNN-based results reported by BIBREF48 , and are on par with the best reported results on this dataset.
Table TABREF21 shows results for Korean→English, using the same configurations (1, 2 and 8) as for German–English. Our results confirm that the techniques we apply are successful across datasets, and result in stronger systems than previously reported on this dataset, achieving 10.37 BLEU as compared to 5.97 BLEU reported by gu-EtAl:2018:EMNLP1.
Conclusions
Our results demonstrate that NMT is in fact a suitable choice in low-data settings, and can outperform PBSMT with far less parallel training data than previously claimed. Recently, the main trend in low-resource MT research has been the better exploitation of monolingual and multilingual resources. Our results show that low-resource NMT is very sensitive to hyperparameters such as BPE vocabulary size, word dropout, and others, and by following a set of best practices, we can train competitive NMT systems without relying on auxiliary resources. This has practical relevance for languages where large amounts of monolingual data, or multilingual data involving related languages, are not available. Even though we focused on only using parallel data, our results are also relevant for work on using auxiliary data to improve low-resource MT. Supervised systems serve as an important baseline to judge the effectiveness of semisupervised or unsupervised approaches, and the quality of supervised systems trained on little data can directly impact semi-supervised workflows, for instance for the back-translation of monolingual data.
Acknowledgments
Rico Sennrich has received funding from the Swiss National Science Foundation in the project CoNTra (grant number 105212_169888). Biao Zhang acknowledges the support of the Baidu Scholarship.
Hyperparameters
Table TABREF23 lists hyperparameters used for the different experiments in the ablation study (Table 2). Hyperparameters were kept constant across different data settings, except for the validation interval and subword vocabulary size (see Table 1).
Sample Translations
Table TABREF24 shows some sample translations that represent typical errors of our PBSMT and NMT systems, trained with ultra-low (100k words) and low (3.2M words) amounts of data. For unknown words such as blutbefleckten (`bloodstained') or Spaniern (`Spaniards', `Spanish'), PBSMT systems default to copying, while NMT systems produce translations on a subword-level, with varying success (blue-flect, bleed; spaniers, Spanians). NMT systems learn some syntactic disambiguation even with very little data, for example the translation of das and die as relative pronouns ('that', 'which', 'who'), while PBSMT produces less grammatical translation. On the flip side, the ultra low-resource NMT system ignores some unknown words in favour of a more-or-less fluent, but semantically inadequate translation: erobert ('conquered') is translated into doing, and richtig aufgezeichnet ('registered correctly', `recorded correctly') into really the first thing. | 10.37 BLEU |
9f3444c9fb2e144465d63abf58520cddd4165a01 | 9f3444c9fb2e144465d63abf58520cddd4165a01_0 | Q: what are the methods they compare with in the korean-english dataset?
| gu-EtAl:2018:EMNLP1 |
2348d68e065443f701d8052018c18daa4ecc120e | 2348d68e065443f701d8052018c18daa4ecc120e_0 | Q: what pitfalls are mentioned in the paper?
| highly data-inefficient, underperform phrase-based statistical machine translation |
5679fabeadf680e35a4f7b092d39e8638dca6b4d | 5679fabeadf680e35a4f7b092d39e8638dca6b4d_0 | Q: Does the paper report the results of previous models applied to the same tasks?
Text: Introduction ::: Background
Over the past two decades, the rise of social media and the digitization of news and discussion platforms have radically transformed how individuals and groups create, process and share news and information. As Alan Rusbridger, former editor-in-chief of the newspaper The Guardian, has it, these technologically-driven shifts in the ways people communicate, organize themselves and express their beliefs and opinions have
empower[ed] those that were never heard, creating a new form of politics and turning traditional news corporations inside out. It is impossible to think of Donald Trump; of Brexit; of Bernie Sanders; of Podemos; of the growth of the far right in Europe; of the spasms of hope and violent despair in the Middle East and North Africa without thinking also of the total inversion of how news is created, shared and distributed. Much of it is liberating and inspiring. Some of it is ugly and dark. And something - the centuries-old craft of journalism - is in danger of being lost BIBREF0.
Rusbridger's observation that the present media-ecology puts traditional notions of politics, journalism, trust and truth at stake is a widely shared one BIBREF1, BIBREF2, BIBREF3. As such, it has sparked interdisciplinary investigations, diagnoses and ideas for remedies across the economical, socio-political, and technological spectrum, challenging our existing assumptions and epistemologies BIBREF4, BIBREF5. Among these lines of inquiry, particular strands of research from the computational social sciences are addressing pressing questions of how emerging technologies and digital methods might be operationalized to regain a grip on the dynamics that govern the flow of on-line news and its associated multitudes of voices, opinions and conflicts. Could the information circulating on on-line (social) news platforms for instance be mined to better understand and analyze the problems facing our contemporary society? Might such data mining and analysis help us to monitor the growing number of social conflicts and crises due to cultural differences and diverging world-views? And finally, would such an approach potentially facilitate early detection of conflicts and even ways to resolve them before they turn violent?
Answering these questions requires further advances in the study of cultural conflict based on digital media data. This includes the development of fine-grained representations of cultural conflict based on theoretically-informed text analysis, the integration of game-theoretical approaches to models of polarization and alignment, as well as the construction of accessible tools and media-monitoring observatories: platforms that foster insight into the complexities of social behaviour and opinion dynamics through automated computational analyses of (social) media data. Through an interdisciplinary approach, the present article aims to make both a practical and theoretical contribution to these aspects of the study of opinion dynamics and conflict in new media environments.
Introduction ::: Objective
The objective of the present article is to critically examine possibilities and limitations of machine-guided exploration and potential facilitation of on-line opinion dynamics on the basis of an experimental data analytics pipeline or observatory for mining and analyzing climate change-related user comments from the news website of The Guardian (TheGuardian.com). Combining insights from the social and political sciences with computational methods for the linguistic analysis of texts, this observatory provides a series of spatial (network) representations of the opinion landscapes on climate change on the basis of causation frames expressed in news website comments. This allows for the exploration of opinion spaces at different levels of detail and aggregation.
Technical and theoretical questions related to the proposed method and infrastructure for the exploration and facilitation of debates will be discussed in three sections. The first section concerns notions of how to define what constitutes a belief or opinion and how these can be mined from texts. To this end, an approach based on the automated extraction of semantic frames expressing causation is proposed. The observatory thus builds on the theoretical premise that expressions of causation such as `global warming causes rises in sea levels' can be revelatory for a person or group's underlying belief systems. Through a further technical description of the observatory's data-analytical components, section two of the paper deals with matters of spatially modelling the output of the semantic frame extractor and how this might be achieved without sacrificing nuances of meaning. The final section of the paper, then, discusses how insights gained from technologically observing opinion dynamics can inform conceptual modelling efforts and approaches to on-line opinion facilitation. As such, the paper brings into view and critically evaluates the fundamental conceptual leap from machine-guided observation to debate facilitation and intervention.
Through the case examples from The Guardian's website and the theoretical discussions explored in these sections, the paper intends to make a twofold contribution to the fields of media studies, opinion dynamics and computational social science. Firstly, the paper introduces and chains together a number of data analytics components for social media monitoring (and facilitation) that were developed in the context of the <project name anonymized for review> infrastructure project. The <project name anonymized for review> infrastructure makes the components discussed in this paper available as open web services in order to foster reproducibility and further experimentation and development <infrastructure reference URL anonymized for review>. Secondly, and supplementing these technological and methodological gains, the paper addresses a number of theoretical, epistemological and ethical questions that are raised by experimental approaches to opinion exploration and facilitation. This notably includes methodological questions on the preservation of meaning through text and data mining, as well as the role of human interpretation, responsibility and incentivisation in observing and potentially facilitating opinion dynamics.
Introduction ::: Data: the communicative setting of TheGuardian.com
In order to study on-line opinion dynamics and build the corresponding climate change opinion observatory discussed in this paper, a corpus of climate-change related news articles and news website comments was analyzed. Concretely, articles from the ‘climate change’ subsection from the news website of The Guardian dated from 2009 up to April 2019 were processed, along with up to 200 comments and associated metadata for articles where commenting was enabled at the time of publication. The choice for studying opinion dynamics using data from The Guardian is motivated by this news website's prominent position in the media landscape as well as its communicative setting, which is geared towards user engagement. Through this interaction with readers, the news platform embodies many of the recent shifts that characterize our present-day media ecology.
TheGuardian.com is generally acknowledged to be one of the UK's leading online newspapers, with 8.2 million unique visitors per month as of May 2013 BIBREF6. The website consists of a core news site, as well as a range of subsections that allow for further classification and navigation of articles. Articles related to climate change can for instance be accessed by navigating through the `News' section, via the subsection `environment', to the sub-subsection `climate change' BIBREF7. All articles on the website can be read free of charge, as The Guardian relies on a business model that combines revenues from advertising, voluntary donations and paid subscriptions.
Apart from offering high-quality, independent journalism on a range of topics, a distinguishing characteristic of The Guardian is its penchant for reader involvement and engagement. Adapting to the changing media landscape and appropriating business models that fit the transition from print to on-line news media, the Guardian has transformed itself into a platform that enables forms of citizen journalism and blogging, and welcomes readers' comments on news articles BIBREF0. In order for a reader to comment on articles, a user account must be created, which provides the user with a unique user name and a user profile page with a stable URL. According to the website's help pages, providing users with an identity that is consistently recognized by the community fosters proper on-line community behaviour BIBREF8. Registered users can post comments on content that is open to commenting, and these comments are moderated by a dedicated moderation team according to The Guardian's community standards and participation guidelines BIBREF9. In support of digital methods and innovative approaches to journalism and data mining, The Guardian has launched an open API (application programming interface) through which developers can access different types of content BIBREF10. It should be noted that at the moment of writing this article, readers' comments are not accessible through this API. For the scientific and educational purposes of this paper, comments were thus consulted using a dedicated scraper.
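For illustration, article metadata (though not the comments) can be retrieved from the Content API along the following lines; the endpoint and parameters follow the public documentation, but the API key and the exact tag value are placeholders to be checked against the current docs.

```python
# Sketch of querying the Guardian Content API for climate-change articles.
# Readers' comments are not exposed by this API; they were scraped separately.
# "YOUR_API_KEY" and the tag value are placeholders.
import requests

params = {
    "tag": "environment/climate-change",
    "from-date": "2009-01-01",
    "to-date": "2019-04-30",
    "page-size": 50,
    "api-key": "YOUR_API_KEY",
}
response = requests.get("https://content.guardianapis.com/search", params=params)
for item in response.json()["response"]["results"]:
    print(item["webPublicationDate"], item["webTitle"])
```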
Taking into account this community and technologically-driven orientation, the communicative setting of The Guardian from which opinions are to be mined and the underlying belief system revealed, is defined by articles, participating commenters and comment spheres (that is, the actual comments aggregated by user, individual article or collection of articles) (see Figure FIGREF4).
In this setting, articles (and previous comments on those articles) can be commented on by participating commenters, each of whom brings to the debate his or her own opinions or belief system. What this belief system might consist of can be inferred on a number of levels, with varying degrees of precision. On the most general level, a generic description of the profile of the average reader of The Guardian can be informative. Such profiles have been compiled by market researchers with the purpose of informing advertisers about the demographic that might be reached through this news website (and other products carrying The Guardian's brand). As of the writing of this article, the audience of The Guardian is presented to advertisers as a `progressive' audience:
Living in a world of unprecedented societal change, with the public narratives around politics, gender, body image, sexuality and diet all being challenged. The Guardian is committed to reflecting the progressive agenda, and reaching the crowd that uphold those values. It’s helpful that we reach over half of progressives in the UK BIBREF11.
A second, equally high-level indicator of the beliefs that might be present on the platform is the set of links through which articles on climate change can be accessed. An article on climate change might for instance be consulted through the environment section of the news website, but also through the business section. Assuming that business interests might potentially be at odds with environmental concerns, it could be hypothesized that the particular comment sphere for that article consists of at least two potentially clashing frames of mind or belief systems.
However, as will be expanded upon further in this article, truly capturing opinion dynamics requires a more systemic and fine-grained approach. The present article therefore proposes a method for harvesting opinions from the actual comment texts. The presupposition is thereby that comment spheres are marked by a diversity of potentially related opinions and beliefs. Opinions might for instance be connected through the reply structure that marks the comment section of an article, but this connection might also manifest itself on a semantic level (that is, the level of meaning or the actual contents of the comments). To capture this multidimensional, interconnected nature of the comment spheres, it is proposed to represent comment spheres as networks, where the nodes represent opinions and beliefs, and edges the relationships between these beliefs (see the spatial representation of beliefs infra). The use of precision language tools to extract such beliefs and their mutual relationships, as will be explored in the following sections, can open up new pathways of model validation and creation.
Mining opinions and beliefs from texts
In traditional experimental settings, survey techniques and associated statistical models provide researchers with established methods to gauge and analyze the opinions of a population. When studying opinion landscapes through on-line social media, however, harvesting beliefs from big textual data such as news website comments and developing or appropriating models for their analysis is a non-trivial task BIBREF12, BIBREF13, BIBREF14.
In the present context, two challenges related to data-gathering and text mining need to be addressed: (1) defining what constitutes an expression of an opinion or belief, and (2) associating this definition with a pattern that might be extracted from texts. Recent scholarship in the fields of natural language processing (NLP) and argumentation mining has yielded a range of instruments and methods for the (automatic) identification of argumentative claims in texts BIBREF15, BIBREF16. Adding to these instruments and methods, the present article proposes an approach in which belief systems or opinions on climate change are accessed through expressions of causation.
Mining opinions and beliefs from texts ::: Causal mapping methods and the climate change debate
The climate change debate is often characterized by expressions of causation, that is, expressions linking a certain cause with a certain effect. Cultural or societal clashes on climate change might for instance concern diverging assessments of whether global warming is man-made or not BIBREF17. Based on such examples, it can be stated that expressions of causation are closely associated with opinions or beliefs, and that as such, these expressions can be considered a valuable indicator for the range and diversity of the opinions and beliefs that constitute the climate change debate. The observatory under discussion therefore focuses on the extraction and analysis of linguistic patterns called causation frames. As will be further demonstrated in this section, the benefit of this causation-based approach is that it offers a systemic approach to opinion dynamics that comprises different layers of meaning, notably the cognitive or social meaningfulness of patterns on account of their being expressions of causation, as well as further lexical and semantic information that might be used for analysis and comparison.
The study of expressions of causation as a method for accessing and assessing belief systems and opinions has been formalized and streamlined since the 1970s. Pioneered by political scientist Robert Axelrod and others, this causal mapping method (also referred to as `cognitive mapping') was introduced as a means of reconstructing and evaluating administrative and political decision-making processes, based on the principle that
the notion of causation is vital to the process of evaluating alternatives. Regardless of philosophical difficulties involved in the meaning of causation, people do evaluate complex policy alternatives in terms of the consequences a particular choice would cause, and ultimately of what the sum of these effects would be. Indeed, such causal analysis is built into our language, and it would be very difficult for us to think completely in other terms, even if we tried BIBREF18.
Axelrod's causal mapping method comprises a set of conventions to graphically represent networks of causes and effects (the nodes in a network) as well as the qualitative aspects of this relation (the network’s directed edges, notably assertions of whether the causal linkage is positive or negative). These causes and effects are to be extracted from relevant sources by means of a series of heuristics and an encoding scheme (it should be noted that for this task Axelrod had human readers in mind). The graphs resulting from these efforts provide a structural overview of the relations among causal assertions (and thus beliefs):
The basic elements of the proposed system are quite simple. The concepts a person uses are represented as points, and the causal links between these concepts are represented as arrows between these points. This gives a pictorial representation of the causal assertions of a person as a graph of points and arrows. This kind of representation of assertions as a graph will be called a cognitive map. The policy alternatives, all of the various causes and effects, the goals, and the ultimate utility of the decision maker can all be thought of as concept variables, and represented as points in the cognitive map. The real power of this approach appears when a cognitive map is pictured in graph form; it is then relatively easy to see how each of the concepts and causal relationships relate to each other, and to see the overall structure of the whole set of portrayed assertions BIBREF18.
In order to construct these cognitive maps based on textual information, Margaret Tucker Wrightson provides a set of reading and coding rules for extracting cause concepts, linkages (relations) and effect concepts from expressions in the English language. The assertion `Our present topic is the militarism of Germany, which is maintaining a state of tension in the Baltic Area' might for instance be encoded as follows: `the militarism of Germany' (cause concept), /+/ (a positive relationship), `maintaining a state of tension in the Baltic area' (effect concept) BIBREF19. Emphasizing the role of human interpretation, it is acknowledged that no strict set of rules can capture the entire spectrum of causal assertions:
The fact that the English language is as varied as those who use it makes the coder's task complex and difficult. No set of rules will completely solve the problems he or she might encounter. These rules, however, provide the coder with guidelines which, if conscientiously followed, will result in outcomes meeting social scientific standards of comparative validity and reliability BIBREF19.
To facilitate the task of encoders, the causal mapping method has gone through various iterations since its original inception, all the while preserving its original premises. Recent software packages have for instance been devised to support the data encoding and drawing process BIBREF20. As such, causal or cognitive mapping has become an established opinion and decision mining method within political science, business and management, and other domains. It has notably proven to be a valuable method for the study of recent societal and cultural conflicts. Thomas Homer-Dixon et al. for instance rely on cognitive-affective maps created from survey data to analyze interpretations of the housing crisis in Germany, Israeli attitudes toward the Western Wall, and moderate versus skeptical positions on climate change BIBREF21. Similarly, Duncan Shaw et al. venture to answer the question of `Why did Brexit happen?' by building causal maps of nine televised debates that were broadcast during the four weeks leading up to the Brexit referendum BIBREF22.
In order to appropriate the method of causal mapping to the study of on-line opinion dynamics, it needs to be expanded from applications at the scale of human readers and relatively small corpora of archival documents and survey answers, to the realm of `big' textual data and larger quantities of information. This attuning of cognitive mapping methods to the large-scale processing of texts required for media monitoring necessarily involves a degree of automation, as will be explored in the next section.
Mining opinions and beliefs from texts ::: Automated causation tracking with the Penelope semantic frame extractor
As outlined in the previous section, causal mapping is based on the extraction of so-called cause concepts, (causal) relations, and effect concepts from texts. The complexity of each of these concepts can range from the relatively simple (as illustrated by the easily-identifiable cause and effect relation in the example of `German militarism' cited earlier), to more complex assertions such as `The development of international cooperation in all fields across the ideological frontiers will gradually remove the hostility and fear that poison international relations', which contains two effect concepts (viz. `the hostility that poisons international relations' and `the fear that poisons international relations'). As such, this statement would have to be encoded as a double relationship BIBREF19.
The coding guidelines in BIBREF19 further reflect that extracting cause and effect concepts from texts is an operation that works on both the syntactical and semantic levels of assertions. This can be illustrated by means of the guidelines for analyzing the aforementioned causal assertion on German militarism:
1. The first step is the realization of the relationship. Does a subject affect an object? 2. Having recognized that it does, the isolation of the cause and effects concepts is the second step. As the sentence structure indicates, "the militarism of Germany" is the causal concept, because it is the initiator of the action, while the direct object clause, "a state of tension in the Baltic area," constitutes that which is somehow influenced, the effect concept BIBREF19.
In the field of computational linguistics, from which the present paper borrows part of its methods, this procedure for extracting information related to causal assertions from texts can be considered an instance of an operation called semantic frame extraction BIBREF23. A semantic frame captures a coherent part of the meaning of a sentence in a structured way. As documented in the FrameNet project BIBREF24, the Causation frame is defined as follows:
A Cause causes an Effect. Alternatively, an Actor, a participant of a (implicit) Cause, may stand in for the Cause. The entity Affected by the Causation may stand in for the overall Effect situation or event BIBREF25.
In a linguistic utterance such as a statement in a news website comment, the Causation frame can be evoked by a series of lexical units, such as `cause', `bring on', etc. In the example `If such a small earthquake CAUSES problems, just imagine a big one!', the Causation frame is triggered by the verb `causes', which therefore is called the frame evoking element. The Cause slot is filled by `a small earthquake', the Effect slot by `problems' BIBREF25.
In order to automatically mine cause and effect concepts from the corpus of comments on The Guardian, the present paper uses the Penelope semantic frame extractor: a tool that exploits the fact that semantic frames can be expressed as form-meaning mappings called constructions. Notably, frames were extracted from Guardian comments by focusing on the following lexical units (verbs, prepositions and conjunctions), listed in FrameNet as frame evoking elements of the Causation frame: Cause.v, Due to.prep, Because.c, Because of.prep, Give rise to.v, Lead to.v or Result in.v.
As illustrated by the following examples, the strings output by the semantic frame extractor adhere closely to the original utterance, preserving all of the real-world noisiness of the comments' causation frames:
The output of the semantic frame extractor as such is used as the input for the ensuing pipeline components in the climate change opinion observatory. The aim of a further analysis of these frames is to find patterns in the beliefs and opinions they express. As will be discussed in the following section, which focuses on applications and cases, maintaining semantic nuances in this further analytic process foregrounds the role of models and aggregation levels.
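The Penelope extractor itself is construction-based and is made available as a web service; as a rough, self-contained illustration of the kind of cause and effect strings involved, the frame evoking elements listed above can be approximated with simple string patterns. The toy sketch below is not the Penelope tool and misses many constructions, but it conveys the shape of the output.

```python
# Toy approximation of Causation-frame spotting (NOT the Penelope extractor):
# split a sentence on a frame evoking connective and treat the two sides as
# candidate cause/effect strings, preserving their original wording.
import re

PATTERNS = [
    (re.compile(r"(.+?)\b(?:because of|due to)\b(.+)", re.I), "effect_cause"),
    (re.compile(r"(.+?)\bbecause\b(.+)", re.I), "effect_cause"),
    (re.compile(r"(.+?)\b(?:causes?|caused|leads? to|led to|results? in|gives? rise to)\b(.+)", re.I), "cause_effect"),
]

def extract_causation(sentence):
    for pattern, order in PATTERNS:
        match = pattern.search(sentence)
        if match:
            left = match.group(1).strip(" ,.")
            right = match.group(2).strip(" ,.")
            if order == "effect_cause":
                return {"cause": right, "effect": left}
            return {"cause": left, "effect": right}
    return None

print(extract_causation("Sea levels are rising because of global warming"))
print(extract_causation("Global warming leads to more extreme weather"))
```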
Analyses and applications
Based on the presupposition that relations between causation frames reveal beliefs, the output of the semantic frame extractor creates various opportunities for exploring opinion landscapes and empirically validating conceptual models for opinion dynamics.
In general, any alignment of conceptual models and real-world data is an exercise in compromising, as the idealized, abstract nature of models is likely to be at odds with the messiness of the actual data. Finding such a compromise might for instance involve a reduction of the simplicity or elegance of the model, or, on the other hand, an increased aggregation (and thus reduced granularity) of the data.
Addressing this challenge, the current section reflects on questions of data modelling, aggregation and meaning by exploring, through case examples, different spatial representations of opinion landscapes mined from TheGuardian.com's comment sphere. These spatial renditions will be understood as network visualizations in which nodes represent argumentative statements (beliefs) and edges the degree of similarity between these statements. On the most general level, then, such a representation can consist of an overview of all the causes expressed in the corpus of climate change-related Guardian comments. This type of visualization provides a bird's-eye view of the entire opinion landscape as mined from the comment texts. In turn, such a general overview might elicit more fine-grained, micro-level investigations, in which a particular cause is singled out and its more specific associated effects are mapped. These macro and micro level overviews come with their own potential for theory building and evaluation, as well as distinct requirements for the depth or detail of meaning that needs to be represented. To get the most general sense of an opinion landscape one might for instance be more tolerant of abstract renditions of beliefs (e.g. by reducing statements to their most frequently used terms), but for more fine-grained analysis one requires more context and nuance (e.g. adhering as closely as possible to the original comment).
Analyses and applications ::: Aggregation
As follows from the above, one of the most fundamental questions when building automated tools to observe opinion dynamics, and potentially to advise on means of debate facilitation, concerns the level of meaning aggregation. A clear argumentative or causal association between, for instance, climate change and catastrophic events such as floods or hurricanes may become detectable by automatic causal frame tracking at the scale of large collections of articles, where this association might appear statistically more often. Detection becomes much more challenging, however, when the aim is to classify small sets of only a few statements in freer expression environments such as comment spheres.
In other words, the problem of meaning aggregation is closely related to issues of scale and aggregation over utterances. The more fine-grained the semantic resolution is, that is, the more specific the cause or effect is that one is interested in, the less probable it is to observe the same statement twice. Moreover, with every independent variable (such as time, different commenters or user groups, etc.), less data on which fine-grained opinion statements are to be detected is available. In the present case of parsed comments from TheGuardian.com, providing insights into the belief system of individual commenters, even if all their statements are aggregated over time, relies on a relatively small set of argumentative statements. This relative sparseness is in part due to the fact that the scope of the semantic frame extractor is confined to the frame evoking elements listed earlier, thus omitting more implicit assertions of causation (i.e. expressions of causation that can only be derived from context and from reading between the lines).
Similarly, as will be explored in the ensuing paragraphs, matters of scale and aggregation determine the types of further linguistic analyses that can be performed on the output of the frame extractor. Within the field of computational linguistics, various techniques have been developed to represent the meaning of words as vectors that capture the contexts in which these words are typically used. Such analyses might reveal patterns of statistical significance, but it is also likely that in creating novel, numerical representations of the original utterances, the semantic structure of argumentatively linked beliefs is lost.
In sum, developing opinion observatories and (potential) debate facilitators entails finding a trade-off, or, in fact, a middle way between macro- and micro-level analyses. On the one hand, one needs to leverage automated analysis methods at the scale of larger collections to maximum advantage. But one also needs to integrate opportunities to interactively zoom into specific aspects of interest and provide more fine-grained information at these levels down to the actual statements. This interplay between macro- and micro-level analyses is explored in the case studies below.
Analyses and applications ::: Spatial renditions of TheGuardian.com's opinion landscape
The main purpose of the observatory under discussion is to provide insight into the belief structures that characterize the opinion landscape on climate change. For reasons outlined above, this raises questions of how to represent opinions and, correspondingly, of which representation is most suited as the atomic unit of comparison between opinions. In general terms, the desired outcome of further processing of the output of the semantic frame extractor is a network representation in which similar cause or effect strings are displayed in close proximity to one another. A high-level description of the pipeline under discussion thus goes as follows. In a first step, it can be decided whether one wants to map cause statements or effect statements. Next, the selected statements are grouped per commenter (i.e. a list is made of all cause statements or effect statements per commenter). These statements are filtered in order to retain only nouns, adjectives and verbs (thereby also omitting frequently occurring verbs such as ‘to be’). The remaining words are then lemmatized, that is, reduced to their dictionary forms. This output is finally translated into a network representation, whereby nodes represent (aggregated) statements, and edges express the semantic relatedness between statements (based on a set overlap whereby the number of shared lemmata is counted).
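By way of illustration, a minimal sketch of this pipeline is given below. It assumes spaCy for part-of-speech filtering and lemmatization and networkx for the network representation; these library choices, as well as all function and variable names, are ours for the purpose of illustration and not necessarily those of the observatory's actual implementation.

# Illustrative sketch of the aggregation pipeline described above (assumptions:
# spaCy and networkx; this is not the observatory's actual code).
import itertools

import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")
CONTENT_POS = {"NOUN", "ADJ", "VERB"}  # retain only nouns, adjectives and verbs


def to_lemmata(statement: str) -> set:
    """Filter a cause/effect statement down to content words and lemmatize them."""
    return {
        token.lemma_.lower()
        for token in nlp(statement)
        if token.pos_ in CONTENT_POS and not token.is_stop  # drops 'to be' etc.
    }


def build_opinion_network(statements_per_commenter: dict) -> nx.Graph:
    """Nodes are statements; edge weights count the lemmata shared by two statements."""
    graph = nx.Graph()
    lemma_sets = {}
    for commenter, statements in statements_per_commenter.items():
        for statement in statements:
            graph.add_node(statement, commenter=commenter)
            lemma_sets[statement] = to_lemmata(statement)
    for a, b in itertools.combinations(lemma_sets, 2):
        overlap = len(lemma_sets[a] & lemma_sets[b])
        if overlap:
            graph.add_edge(a, b, weight=overlap)
    return graph

The resulting graph can then be exported (for instance via nx.write_gexf) for layout and inspection in a network analysis tool such as Gephi.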
As illustrated by two spatial renditions that were created using this approach and visualized using the network analysis tool Gephi BIBREF26, the labels assigned to these nodes (lemmata, full statements, or other) can be adapted to the scope of the analysis.
Analyses and applications ::: Spatial renditions of TheGuardian.com's opinion landscape ::: A macro-level overview: causes addressed in the climate change debate
Suppose one wants to get a first idea about the scope and diversity of an opinion landscape, without any preconceived notions of this landscape's structure or composition. One way of doing this would be to map all of the causes that are mentioned in comments related to articles on climate change, that is, creating an overview of all the causes that have been retrieved by the frame extractor in a single representation. Such a representation would not immediately provide the granularity to state what the beliefs or opinions in the debates actually are, but rather, it might inspire a sense of what those opinions might be about, thus pointing towards potentially interesting phenomena that might warrant closer examination.
Figure FIGREF10, a high-level overview of the opinion landscape, reveals a number of areas to which opinions and beliefs might pertain. The top-left clusters in the diagram for instance reveal opinions about the role of people and countries, whereas on the right-hand side, we find a complementary cluster that might point to beliefs concerning the influence of high or increased CO2-emissions. In between, there is a cluster on power and energy sources, reflecting the energy debate's association with both issues of human responsibility and CO2 emissions. As such, the overview can, at best, already inspire some very general hypotheses about the types of opinions that figure in the climate change debate.
Analyses and applications ::: Spatial renditions of TheGuardian.com's opinion landscape ::: Micro-level investigations: opinions on nuclear power and global warming
Based on the range of topics on which beliefs are expressed, a micro-level analysis can be conducted to reveal what those beliefs are and, for instance, whether they align or contradict each other. This can be achieved by singling out a cause of interest, and mapping out its associated effects.
As revealed by the global overview of the climate change opinion landscape, a portion of the debate concerns power and energy sources. One topic with a particularly interesting role in this debate is nuclear power. Figure FIGREF12 illustrates how a more detailed representation of opinions on this matter can be created by spatially representing all of the effects associated with causes containing the expression `nuclear power'. Again, similar beliefs (in terms of words used in the effects) are positioned closer to each other, thus facilitating the detection of clusters. Commenters on The Guardian for instance express concerns about the deaths or extinction that might be caused by this energy resource. They also voice opinions on its cleanliness, whether or not it might decrease pollution or be its own source of pollution, and how it reduces CO2-emissions in different countries.
Whereas the detailed opinion landscape on `nuclear power' is relatively limited in terms of the number of mined opinions, other topics might reveal more elaborate belief systems. This is for instance the case for the phenomenon of `global warming'. As shown in Figure FIGREF13, opinions on global warming are clustered around the idea of `increases', notably in terms of evaporation, drought, heat waves, intensity of cyclones and storms, etc. An adjacent cluster is related to `extremes', such as extreme summers and weather events, but also extreme colds.
From opinion observation to debate facilitation
The observatory introduced in the preceding paragraphs provides preliminary insights into the range and scope of the beliefs that figure in climate change debates on TheGuardian.com. The observatory as such takes a distinctly descriptive stance, and aims to satisfy, at least in part, the information needs of researchers, activists, journalists and other stakeholders whose main concern is to document, investigate and understand on-line opinion dynamics. However, in the current information sphere, which is marked by polarization, misinformation and a close entanglement with real-world conflicts, taking a mere descriptive or neutral stance might not serve every stakeholder's needs. Indeed, given the often skewed relations between power and information, questions arise as to how media observations might in turn be translated into (political, social or economic) action. Knowledge about opinion dynamics might for instance inform interventions that remedy polarization or disarm conflict. In other words, the construction of (social) media observatories unavoidably raises questions about the possibilities, limitations and, especially, implications of the machine-guided and human-incentivized facilitation of on-line discussions and debates.
Addressing these questions, the present paragraph introduces and explores the concept of a debate facilitator, that is, a device that extends the capabilities of the previously discussed observatory to also promote more interesting and constructive discussions. Concretely, we will conceptualize a device that reveals how the personal opinion landscapes of commenters relate to each other (in terms of overlap or lack thereof), and we will discuss what steps might potentially be taken on the basis of such representation to balance the debate. Geared towards possible interventions in the debate, such a device may thus go well beyond the observatory's objectives of making opinion processes and conflicts more transparent, which concomitantly raises a number of serious concerns that need to be acknowledged.
On rather fundamental ground, tools that steer debates in one way or another may easily become manipulative and dangerous instruments in the hands of certain interest groups. Various aspects of our daily lives are for instance already implicitly guided by recommender systems, the purpose and impact of which can be rather opaque. For this reason, research efforts across disciplines are directed at scrutinizing and rendering such systems more transparent BIBREF28. Such scrutiny is particularly pressing in the context of interventions on on-line communication platforms, which have already been argued to enforce affective communication styles that feed rather than resolve conflict. The objectives behind any facilitation device should therefore be made maximally transparent and potential biases should be fully acknowledged at every level, from data ingest to the dissemination of results BIBREF29. More concretely, the endeavour of constructing opinion observatories and facilitators foregrounds matters of `openness' of data and tools, security, ensuring data quality and representative sampling, accounting for evolving data legislation and policy, building communities and trust, and envisioning beneficial implications. By documenting the development process for a potential facilitation device, the present paper aims to contribute to these on-going investigations and debates. Furthermore, every effort has been made to protect the identities of the commenters involved. In the words of media and technology visionary Jaron Lanier, developers and computational social scientists entering this space should remain fundamentally aware of the fact that `digital information is really just people in disguise' BIBREF30.
With these reservations in mind, the proposed approach can be situated among ongoing efforts that lead from debate observation to facilitation. One such pathway, for instance, involves the construction of filters to detect hate speech, misinformation and other forms of expression that might render debates toxic BIBREF31, BIBREF32. Combined with community outreach, language-based filtering and detection tools have proven to raise awareness among social media users about the nature and potential implications of their on-line contributions BIBREF33. Similarly, advances can be expected from approaches that aim to extend the scope of analysis beyond descriptions of a present debate situation in order to model how a debate might evolve over time and how intentions of the participants could be included in such an analysis.
Progress in any of these areas hinges on a further integration of real-world data in the modelling process, as well as a further socio-technical and media-theoretical investigation of how activity on social media platforms and technologies correlate to real-world conflicts. The remainder of this section therefore ventures to explore how conceptual argument communication models for polarization and alignment BIBREF34 might be reconciled with real-world data, and how such models might inform debate facilitation efforts.
From opinion observation to debate facilitation ::: Debate facilitation through models of alignment and polarization
As discussed in previous sections, news websites like TheGuardian.com establish a communicative setting in which agents (users, commenters) exchange arguments about different issues or topics. For those seeking to establish a healthy debate, it could thus be of interest to know how different users relate to each other in terms of their beliefs about a certain issue or topic (in this case climate change). Which beliefs are for instance shared by users and which ones are not? In other words, can we map patterns of alignment or polarization among users?
Figure FIGREF15 ventures to demonstrate how representations of opinion landscapes (generated using the methods outlined above) can be enriched with user information to answer such questions. Specifically, the graph represents the beliefs of two among the most active commenters in the corpus. The opinions of each user are marked using a colour coding scheme: red nodes represent the beliefs of the first user, blue nodes represent the beliefs of the second user. Nodes with a green colour represent beliefs that are shared by both users.
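The colour coding itself can be derived from the opinion network with only a few lines of code. The sketch below assumes that, for each node, the set of commenters who expressed the corresponding belief is known; the function and attribute names are ours, and the snippet merely illustrates the assignment rule rather than the code behind Figure FIGREF15.

# Illustrative colour assignment for a two-commenter comparison.
def node_colour(owners: set, user_red: str, user_blue: str) -> str:
    """owners: the commenters who expressed the belief represented by a node."""
    if user_red in owners and user_blue in owners:
        return "green"  # belief shared by both commenters
    return "red" if user_red in owners else "blue"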
Taking into account again the factors of aggregation that were discussed in the previous section, Figure FIGREF15 supports some preliminary observations about the relationship between the two users in terms of their beliefs. Generally, given the fact that the graph concerns the two most active commenters on the website, it can be seen that the rendered opinion landscape is quite extensive. It is also clear that the belief systems of both users are not unrelated, as nodes of all colours can be found distributed throughout the graph. This is especially the case for the right-hand top cluster and right-hand bottom cluster of the graph, where green, red, and blue nodes are mixed. Since both users are commenting on articles on climate change, a degree of affinity between opinions or beliefs is to be expected.
Upon closer examination, a number of disparities between the belief systems of the two commenters can be detected. Considering the left-hand top cluster and center of the graph, it becomes clear that only the red commenter uses a selection of terms related to the economic and socio-political realm (e.g. `people', `american', `nation', `government') and industry (e.g. `fuel', `industry', `car', etc.). The blue commenter, on the other hand, exclusively uses a range of terms that could be deemed more technical and scientific in nature (e.g. `feedback', `property', `output', `trend', `variability', etc.). From the graph, it also follows that the blue commenter does not enter into the red commenter's `social' segments of the graph as frequently as the red commenter enters the more scientifically-oriented clusters of the graph (although in the latter cases the red commenter does not use the specific technical terminology of the blue commenter). The cluster where both beliefs mingle the most (and where overlap can be observed) is the top right cluster. This overlap is constituted by very general terms (e.g. `climate', `change', and `science'). In sum, the graph reveals that the commenters' beliefs are positioned most closely to each other on the most general aspects of the debate, whereas there is less relatedness on the social and more technical aspects of the debate. In this regard, the depicted situation seemingly evokes currently on-going debates about the role or responsibilities of the people or individuals versus that of experts when it comes to climate change BIBREF35, BIBREF36, BIBREF37.
What forms of debate facilitation, then, could be based on these observations? And what kind of collective effects can be expected? As follows from the above, beliefs expressed by the two commenters shown here (who are selected based on their active participation rather than actual engagement or dialogue with one another) are to some extent complementary, as the blue commenter, who displays a scientifically-oriented system of beliefs, does not readily engage with the social topics discussed by the red commenter. As such, the overall opinion landscape of the climate change debate could potentially be enriched with novel perspectives if the blue commenter were invited to engage in a debate about such topics as industry and government. Similarly, one could explore the possibility of providing explanatory tools or additional references on occasions where the debate takes a more technical turn.
However, argument-based models of collective attitude formation BIBREF38, BIBREF34 also tell us to be cautious about such potential interventions. Following the theory underlying these models, different opinion groups prevailing during different periods of a debate will activate different argumentative associations. Facilitating exchange between users with complementary arguments supporting similar opinions may enforce biased argument pools BIBREF39 and lead to increasing polarization at the collective level. In the example considered here, the two commenters agree on the general topic, but the analysis suggests that they might have different opinions about the adequate direction of specific climate change action. A more fine-grained automatic detection of cognitive and evaluative associations between arguments and opinions is needed for a reliable use of models to predict what would come out of facilitating exchange between two specific users. In this regard, computational approaches to the linguistic analysis of texts such as semantic frame extraction offer productive opportunities for empirically modelling opinion dynamics. Extraction of causation frames allows one to disentangle cause-effect relations between semantic units, which provides a productive step towards mapping and measuring structures of cognitive associations. These opportunities are to be explored by future work.
Conclusion
Ongoing transitions from a print-based media ecology to on-line news and discussion platforms have put traditional forms of news production and consumption at stake. Many challenges related to how information is currently produced and consumed come to a head in news website comment sections, which harbour the potential of providing new insights into how cultural conflicts emerge and evolve. On the basis of an observatory for analyzing climate change-related comments from TheGuardian.com, this article has critically examined possibilities and limitations of the machine-assisted exploration and possible facilitation of on-line opinion dynamics and debates.
Beyond technical and modelling pathways, this examination brings into view broader methodological and epistemological aspects of the use of digital methods to capture and study the flow of on-line information and opinions. Notably, the proposed approaches raise questions of computational analysis and interpretation that can be tied to an overarching tension between `distant' and `close reading' BIBREF40. In other words, monitoring on-line opinion dynamics means embracing the challenges and associated trade-offs that come with investigating large quantities of information through computational, text-analytical means, but doing this in such a way that nuance and meaning are not lost in the process.
Establishing productive cross-overs between the level of opinions mined at scale (for instance through the lens of causation frames) and the detailed, closer looks at specific conversations, interactions and contexts depends on a series of preliminaries. One of these is the continued availability of high-quality, accessible data. As the current on-line media ecology is recovering from recent privacy-related scandals (e.g. Cambridge Analytica), such data for obvious reasons is not always easy to come by. In the same legal and ethical vein, reproducibility and transparency of models are crucial to the further development of analytical tools and methods. As the experiments discussed in this paper have revealed, a key factor in this undertaking is the human faculty of interpretation. Just like the encoding schemes introduced by Axelrod and others before the widespread use of computational methods, present-day pipelines and tools foreground the role of human agents as the primary source of meaning attribution.
<This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 732942 (Opinion Dynamics and Cultural Conflict in European Spaces – www.Odycceus.eu).> | Yes |
5679fabeadf680e35a4f7b092d39e8638dca6b4d | 5679fabeadf680e35a4f7b092d39e8638dca6b4d_1 | Q: Does the paper report the results of previous models applied to the same tasks?
Text: Introduction ::: Background
Over the past two decades, the rise of social media and the digitization of news and discussion platforms have radically transformed how individuals and groups create, process and share news and information. As Alan Rusbridger, former editor-in-chief of the newspaper The Guardian, has it, these technologically-driven shifts in the ways people communicate, organize themselves and express their beliefs and opinions, have
empower[ed] those that were never heard, creating a new form of politics and turning traditional news corporations inside out. It is impossible to think of Donald Trump; of Brexit; of Bernie Sanders; of Podemos; of the growth of the far right in Europe; of the spasms of hope and violent despair in the Middle East and North Africa without thinking also of the total inversion of how news is created, shared and distributed. Much of it is liberating and inspiring. Some of it is ugly and dark. And something - the centuries-old craft of journalism - is in danger of being lost BIBREF0.
Rusbridger's observation that the present media-ecology puts traditional notions of politics, journalism, trust and truth at stake is a widely shared one BIBREF1, BIBREF2, BIBREF3. As such, it has sparked interdisciplinary investigations, diagnoses and ideas for remedies across the economical, socio-political, and technological spectrum, challenging our existing assumptions and epistemologies BIBREF4, BIBREF5. Among these lines of inquiry, particular strands of research from the computational social sciences are addressing pressing questions of how emerging technologies and digital methods might be operationalized to regain a grip on the dynamics that govern the flow of on-line news and its associated multitudes of voices, opinions and conflicts. Could the information circulating on on-line (social) news platforms for instance be mined to better understand and analyze the problems facing our contemporary society? Might such data mining and analysis help us to monitor the growing number of social conflicts and crises due to cultural differences and diverging world-views? And finally, would such an approach potentially facilitate early detection of conflicts and even ways to resolve them before they turn violent?
Answering these questions requires further advances in the study of cultural conflict based on digital media data. This includes the development of fine-grained representations of cultural conflict based on theoretically-informed text analysis, the integration of game-theoretical approaches to models of polarization and alignment, as well as the construction of accessible tools and media-monitoring observatories: platforms that foster insight into the complexities of social behaviour and opinion dynamics through automated computational analyses of (social) media data. Through an interdisciplinary approach, the present article aims to make both a practical and theoretical contribution to these aspects of the study of opinion dynamics and conflict in new media environments.
Introduction ::: Objective
The objective of the present article is to critically examine possibilities and limitations of machine-guided exploration and potential facilitation of on-line opinion dynamics on the basis of an experimental data analytics pipeline or observatory for mining and analyzing climate change-related user comments from the news website of The Guardian (TheGuardian.com). Combining insights from the social and political sciences with computational methods for the linguistic analysis of texts, this observatory provides a series of spatial (network) representations of the opinion landscapes on climate change on the basis of causation frames expressed in news website comments. This allows for the exploration of opinion spaces at different levels of detail and aggregation.
Technical and theoretical questions related to the proposed method and infrastructure for the exploration and facilitation of debates will be discussed in three sections. The first section concerns notions of how to define what constitutes a belief or opinion and how these can be mined from texts. To this end, an approach based on the automated extraction of semantic frames expressing causation is proposed. The observatory thus builds on the theoretical premise that expressions of causation such as `global warming causes rises in sea levels' can be revelatory for a person or group's underlying belief systems. Through a further technical description of the observatory's data-analytical components, section two of the paper deals with matters of spatially modelling the output of the semantic frame extractor and how this might be achieved without sacrificing nuances of meaning. The final section of the paper, then, discusses how insights gained from technologically observing opinion dynamics can inform conceptual modelling efforts and approaches to on-line opinion facilitation. As such, the paper brings into view and critically evaluates the fundamental conceptual leap from machine-guided observation to debate facilitation and intervention.
Through the case examples from The Guardian's website and the theoretical discussions explored in these sections, the paper intends to make a twofold contribution to the fields of media studies, opinion dynamics and computational social science. Firstly, the paper introduces and chains together a number of data analytics components for social media monitoring (and facilitation) that were developed in the context of the <project name anonymized for review> infrastructure project. The <project name anonymized for review> infrastructure makes the components discussed in this paper available as open web services in order to foster reproducibility and further experimentation and development <infrastructure reference URL anonymized for review>. Secondly, and supplementing these technological and methodological gains, the paper addresses a number of theoretical, epistemological and ethical questions that are raised by experimental approaches to opinion exploration and facilitation. This notably includes methodological questions on the preservation of meaning through text and data mining, as well as the role of human interpretation, responsibility and incentivisation in observing and potentially facilitating opinion dynamics.
Introduction ::: Data: the communicative setting of TheGuardian.com
In order to study on-line opinion dynamics and build the corresponding climate change opinion observatory discussed in this paper, a corpus of climate-change related news articles and news website comments was analyzed. Concretely, articles from the ‘climate change’ subsection from the news website of The Guardian dated from 2009 up to April 2019 were processed, along with up to 200 comments and associated metadata for articles where commenting was enabled at the time of publication. The choice for studying opinion dynamics using data from The Guardian is motivated by this news website's prominent position in the media landscape as well as its communicative setting, which is geared towards user engagement. Through this interaction with readers, the news platform embodies many of the recent shifts that characterize our present-day media ecology.
TheGuardian.com is generally acknowledged to be one of the UK's leading online newspapers, with 8.2 million unique visitors per month as of May 2013 BIBREF6. The website consists of a core news site, as well as a range of subsections that allow for further classification and navigation of articles. Articles related to climate change can for instance be accessed by navigating through the `News' section, over the subsection `environment', to the subsubsection `climate change' BIBREF7. All articles on the website can be read free of charge, as The Guardian relies on a business model that combines revenues from advertising, voluntary donations and paid subscriptions.
Apart from offering high-quality, independent journalism on a range of topics, a distinguishing characteristic of The Guardian is its penchant for reader involvement and engagement. Adapting to the changing media landscape and appropriating business models that fit the transition from print to on-line news media, The Guardian has transformed itself into a platform that enables forms of citizen journalism, blogging, and welcomes readers' comments on news articles BIBREF0. In order for a reader to comment on articles, it is required that a user account is made, which provides a user with a unique user name and a user profile page with a stable URL. According to the website's help pages, providing users with an identity that is consistently recognized by the community fosters proper on-line community behaviour BIBREF8. Registered users can post comments on content that is open to commenting, and these comments are moderated by a dedicated moderation team according to The Guardian's community standards and participation guidelines BIBREF9. In support of digital methods and innovative approaches to journalism and data mining, The Guardian has launched an open API (application programming interface) through which developers can access different types of content BIBREF10. It should be noted that at the moment of writing this article, readers' comments are not accessible through this API. For the scientific and educational purposes of this paper, comments were thus consulted using a dedicated scraper.
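For readers wishing to experiment with this API, a request for climate change-related articles might, at the time of writing, look roughly as follows. The endpoint and parameter names shown here are indicative of the public documentation and may change; as noted, comments themselves are not available through the API and have to be gathered by other means.

# Indicative sketch of querying The Guardian's open content API for articles
# (comments are not available through this API and were gathered separately).
import requests

response = requests.get(
    "https://content.guardianapis.com/search",
    params={
        "q": "climate change",
        "from-date": "2009-01-01",
        "page-size": 50,
        "api-key": "YOUR_API_KEY",  # placeholder: register for a developer key
    },
)
for article in response.json()["response"]["results"]:
    print(article["webTitle"], article["webUrl"])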
Taking into account this community and technologically-driven orientation, the communicative setting of The Guardian from which opinions are to be mined and the underlying belief system revealed, is defined by articles, participating commenters and comment spheres (that is, the actual comments aggregated by user, individual article or collection of articles) (see Figure FIGREF4).
In this setting, articles (and previous comments on those articles) can be commented on by participating commenters, each of whom brings to the debate his or her own opinions or belief system. What this belief system might consist of can be inferred on a number of levels, with varying degrees of precision. On the most general level, a generic description of the profile of the average reader of The Guardian can be informative. Such profiles have been compiled by market researchers with the purpose of informing advertisers about the demographic that might be reached through this news website (and other products carrying The Guardian's brand). As of the writing of this article, the audience of The Guardian is presented to advertisers as a `progressive' audience:
Living in a world of unprecedented societal change, with the public narratives around politics, gender, body image, sexuality and diet all being challenged. The Guardian is committed to reflecting the progressive agenda, and reaching the crowd that uphold those values. It’s helpful that we reach over half of progressives in the UK BIBREF11.
A second, equally high-level indicator of the beliefs that might be present on the platform is provided by the links through which articles on climate change can be accessed. An article on climate change might for instance be consulted through the environment section of the news website, but also through the business section. Assuming that business interests might potentially be at odds with environmental concerns, it could be hypothesized that the particular comment sphere for that article consists of at least two potentially clashing frames of mind or belief systems.
However, as will be expanded upon further in this article, truly capturing opinion dynamics requires a more systemic and fine-grained approach. The present article therefore proposes a method for harvesting opinions from the actual comment texts. The presupposition is thereby that comment spheres are marked by a diversity of potentially related opinions and beliefs. Opinions might for instance be connected through the reply structure that marks the comment section of an article, but this connection might also manifest itself on a semantic level (that is, the level of meaning or the actual contents of the comments). To capture this multidimensional, interconnected nature of the comment spheres, it is proposed to represent comment spheres as networks, where the nodes represent opinions and beliefs, and edges the relationships between these beliefs (see the spatial representation of beliefs infra). The use of precision language tools to extract such beliefs and their mutual relationships, as will be explored in the following sections, can open up new pathways of model validation and creation.
Mining opinions and beliefs from texts
In traditional experimental settings, survey techniques and associated statistical models provide researchers with established methods to gauge and analyze the opinions of a population. When studying opinion landscapes through on-line social media, however, harvesting beliefs from big textual data such as news website comments and developing or appropriating models for their analysis is a non-trivial task BIBREF12, BIBREF13, BIBREF14.
In the present context, two challenges related to data-gathering and text mining need to be addressed: (1) defining what constitutes an expression of an opinion or belief, and (2) associating this definition with a pattern that might be extracted from texts. Recent scholarship in the fields of natural language processing (NLP) and argumentation mining has yielded a range of instruments and methods for the (automatic) identification of argumentative claims in texts BIBREF15, BIBREF16. Adding to these instruments and methods, the present article proposes an approach in which belief systems or opinions on climate change are accessed through expressions of causation.
Mining opinions and beliefs from texts ::: Causal mapping methods and the climate change debate
The climate change debate is often characterized by expressions of causation, that is, expressions linking a certain cause with a certain effect. Cultural or societal clashes on climate change might for instance concern diverging assessments of whether global warming is man-made or not BIBREF17. Based on such examples, it can be stated that expressions of causation are closely associated with opinions or beliefs, and that as such, these expressions can be considered a valuable indicator for the range and diversity of the opinions and beliefs that constitute the climate change debate. The observatory under discussion therefore focuses on the extraction and analysis of linguistic patterns called causation frames. As will be further demonstrated in this section, the benefit of this causation-based approach is that it offers a systemic approach to opinion dynamics that comprises different layers of meaning, notably the cognitive or social meaningfulness of patterns on account of their being expressions of causation, as well as further lexical and semantic information that might be used for analysis and comparison.
The study of expressions of causation as a method for accessing and assessing belief systems and opinions has been formalized and streamlined since the 1970s. Pioneered by political scientist Robert Axelrod and others, this causal mapping method (also referred to as `cognitive mapping') was introduced as a means of reconstructing and evaluating administrative and political decision-making processes, based on the principle that
the notion of causation is vital to the process of evaluating alternatives. Regardless of philosophical difficulties involved in the meaning of causation, people do evaluate complex policy alternatives in terms of the consequences a particular choice would cause, and ultimately of what the sum of these effects would be. Indeed, such causal analysis is built into our language, and it would be very difficult for us to think completely in other terms, even if we tried BIBREF18.
Axelrod's causal mapping method comprises a set of conventions to graphically represent networks of causes and effects (the nodes in a network) as well as the qualitative aspects of this relation (the network’s directed edges, notably assertions of whether the causal linkage is positive or negative). These causes and effects are to be extracted from relevant sources by means of a series of heuristics and an encoding scheme (it should be noted that for this task Axelrod had human readers in mind). The graphs resulting from these efforts provide a structural overview of the relations among causal assertions (and thus beliefs):
The basic elements of the proposed system are quite simple. The concepts a person uses are represented as points, and the causal links between these concepts are represented as arrows between these points. This gives a pictorial representation of the causal assertions of a person as a graph of points and arrows. This kind of representation of assertions as a graph will be called a cognitive map. The policy alternatives, all of the various causes and effects, the goals, and the ultimate utility of the decision maker can all be thought of as concept variables, and represented as points in the cognitive map. The real power of this approach appears when a cognitive map is pictured in graph form; it is then relatively easy to see how each of the concepts and causal relationships relate to each other, and to see the overall structure of the whole set of portrayed assertions BIBREF18.
In order to construct these cognitive maps based on textual information, Margaret Tucker Wrightson provides a set of reading and coding rules for extracting cause concepts, linkages (relations) and effect concepts from expressions in the English language. The assertion `Our present topic is the militarism of Germany, which is maintaining a state of tension in the Baltic Area' might for instance be encoded as follows: `the militarism of Germany' (cause concept), /+/ (a positive relationship), `maintaining a state of tension in the Baltic area' (effect concept) BIBREF19. Emphasizing the role of human interpretation, it is acknowledged that no strict set of rules can capture the entire spectrum of causal assertions:
The fact that the English language is as varied as those who use it makes the coder's task complex and difficult. No set of rules will completely solve the problems he or she might encounter. These rules, however, provide the coder with guidelines which, if conscientiously followed, will result in outcomes meeting social scientific standards of comparative validity and reliability BIBREF19.
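By way of illustration, the encoded assertion on German militarism given above can be thought of as a signed, directed edge in a graph. The sketch below (ours, using networkx) is merely indicative of how such a cognitive map might be represented programmatically; it reproduces neither Axelrod's own notation nor the dedicated software packages discussed next.

# Illustrative rendering of an Axelrod-style cognitive map as a signed digraph.
import networkx as nx

cognitive_map = nx.DiGraph()
cognitive_map.add_edge(
    "the militarism of Germany",
    "a state of tension in the Baltic area",
    sign="+",  # /+/ : the cause concept is asserted to promote the effect concept
)
for cause, effect, data in cognitive_map.edges(data=True):
    print(f"{cause} --{data['sign']}--> {effect}")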
To facilitate the task of encoders, the causal mapping method has gone through various iterations since its original inception, all the while preserving its original premises. Recent software packages have for instance been devised to support the data encoding and drawing process BIBREF20. As such, causal or cognitive mapping has become an established opinion and decision mining method within political science, business and management, and other domains. It has notably proven to be a valuable method for the study of recent societal and cultural conflicts. Thomas Homer-Dixon et al. for instance rely on cognitive-affective maps created from survey data to analyze interpretations of the housing crisis in Germany, Israeli attitudes toward the Western Wall, and moderate versus skeptical positions on climate change BIBREF21. Similarly, Duncan Shaw et al. venture to answer the question of `Why did Brexit happen?' by building causal maps of nine televised debates that were broadcast during the four weeks leading up to the Brexit referendum BIBREF22.
In order to appropriate the method of causal mapping to the study of on-line opinion dynamics, it needs to be expanded from applications at the scale of human readers and relatively small corpora of archival documents and survey answers, to the realm of `big' textual data and larger quantities of information. This attuning of cognitive mapping methods to the large-scale processing of texts required for media monitoring necessarily involves a degree of automation, as will be explored in the next section.
Mining opinions and beliefs from texts ::: Automated causation tracking with the Penelope semantic frame extractor
As outlined in the previous section, causal mapping is based on the extraction of so-called cause concepts, (causal) relations, and effect concepts from texts. The complexity of each of these concepts can range from the relatively simple (as illustrated by the easily-identifiable cause and effect relation in the example of `German militarism' cited earlier), to more complex assertions such as `The development of international cooperation in all fields across the ideological frontiers will gradually remove the hostility and fear that poison international relations', which contains two effect concepts (viz. `the hostility that poisons international relations' and `the fear that poisons international relations'). As such, this statement would have to be encoded as a double relationship BIBREF19.
The coding guidelines in BIBREF19 further reflect that extracting cause and effect concepts from texts is an operation that works on both the syntactical and semantic levels of assertions. This can be illustrated by means of the guidelines for analyzing the aforementioned causal assertion on German militarism:
1. The first step is the realization of the relationship. Does a subject affect an object? 2. Having recognized that it does, the isolation of the cause and effects concepts is the second step. As the sentence structure indicates, "the militarism of Germany" is the causal concept, because it is the initiator of the action, while the direct object clause, "a state of tension in the Baltic area," constitutes that which is somehow influenced, the effect concept BIBREF19.
In the field of computational linguistics, from which the present paper borrows part of its methods, this procedure for extracting information related to causal assertions from texts can be considered an instance of an operation called semantic frame extraction BIBREF23. A semantic frame captures a coherent part of the meaning of a sentence in a structured way. As documented in the FrameNet project BIBREF24, the Causation frame is defined as follows:
A Cause causes an Effect. Alternatively, an Actor, a participant of a (implicit) Cause, may stand in for the Cause. The entity Affected by the Causation may stand in for the overall Effect situation or event BIBREF25.
In a linguistic utterance such as a statement in a news website comment, the Causation frame can be evoked by a series of lexical units, such as `cause', `bring on', etc. In the example `If such a small earthquake CAUSES problems, just imagine a big one!', the Causation frame is triggered by the verb `causes', which therefore is called the frame evoking element. The Cause slot is filled by `a small earthquake', the Effect slot by `problems' BIBREF25.
In order to automatically mine cause and effect concepts from the corpus of comments on The Guardian, the present paper uses the Penelope semantic frame extractor: a tool that exploits the fact that semantic frames can be expressed as form-meaning mappings called constructions. Notably, frames were extracted from Guardian comments by focusing on the following lexical units (verbs, prepositions and conjunctions), listed in FrameNet as frame evoking elements of the Causation frame: Cause.v, Due to.prep, Because.c, Because of.prep, Give rise to.v, Lead to.v or Result in.v.
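To give a rough idea of how such frame evoking elements can be used to split an utterance into Cause and Effect slots, a drastically simplified pattern matcher is sketched below. This is an illustration of the general principle only: the Penelope extractor itself operates on constructions (form-meaning mappings) rather than on the naive string splitting shown here.

# Drastically simplified illustration of causation-frame matching; the actual
# Penelope extractor is construction-based, not a string splitter.
import re

# A subset of the frame evoking elements of FrameNet's Causation frame.
FRAME_EVOKING_ELEMENTS = [
    (r"\bbecause of\b", "effect-cause"),      # "<effect> because of <cause>"
    (r"\bbecause\b", "effect-cause"),
    (r"\bdue to\b", "effect-cause"),
    (r"\bgives? rise to\b", "cause-effect"),  # "<cause> gives rise to <effect>"
    (r"\bleads? to\b", "cause-effect"),
    (r"\bresults? in\b", "cause-effect"),
    (r"\bcauses?\b", "cause-effect"),
]


def extract_causation(sentence: str):
    """Return a (cause, effect) pair if a frame evoking element is found."""
    lowered = sentence.lower()
    for pattern, order in FRAME_EVOKING_ELEMENTS:
        match = re.search(pattern, lowered)
        if match:
            left = sentence[: match.start()].strip(" ,.")
            right = sentence[match.end():].strip(" ,.")
            return (right, left) if order == "effect-cause" else (left, right)
    return None


print(extract_causation("Global warming causes rises in sea levels"))
# -> ('Global warming', 'rises in sea levels')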
As illustrated by the following examples, the strings output by the semantic frame extractor adhere closely to the original utterances, preserving all of the real-world noisiness of the comments' causation frames:
The output of the semantic frame extractor as such is used as the input for the ensuing pipeline components in the climate change opinion observatory. The aim of a further analysis of these frames is to find patterns in the beliefs and opinions they express. As will be discussed in the following section, which focuses on applications and cases, maintaining semantic nuances in this further analytic process foregrounds the role of models and aggregation levels.
Analyses and applications
Based on the presupposition that relations between causation frames reveal beliefs, the output of the semantic frame extractor creates various opportunities for exploring opinion landscapes and empirically validating conceptual models for opinion dynamics.
In general, any alignment of conceptual models and real-world data is an exercise in compromising, as the idealized, abstract nature of models is likely to be at odds with the messiness of the actual data. Finding such a compromise might for instance involve a reduction of the simplicity or elegance of the model, or, on the other hand, an increased aggregation (and thus reduced granularity) of the data.
Addressing this challenge, the current section reflects on questions of data modelling, aggregation and meaning by exploring, through case examples, different spatial representations of opinion landscapes mined from TheGuardian.com's comment sphere. These spatial renditions will be understood as network visualizations in which nodes represent argumentative statements (beliefs) and edges the degree of similarity between these statements. On the most general level, then, such a representation can consist of an overview of all the causes expressed in the corpus of climate change-related Guardian comments. This type of visualization provides a bird's-eye view of the entire opinion landscape as mined from the comment texts. In turn, such a general overview might elicit more fine-grained, micro-level investigations, in which a particular cause is singled out and its more specific associated effects are mapped. These macro and micro level overviews come with their own potential for theory building and evaluation, as well as distinct requirements for the depth or detail of meaning that needs to be represented. To get the most general sense of an opinion landscape one might for instance be more tolerant of abstract renditions of beliefs (e.g. by reducing statements to their most frequently used terms), but for more fine-grained analysis one requires more context and nuance (e.g. adhering as closely as possible to the original comment).
Analyses and applications ::: Aggregation
As follows from the above, one of the most fundamental questions when building automated tools to observe opinion dynamics, and potentially to advise on means of debate facilitation, concerns the level of meaning aggregation. A clear argumentative or causal association between, for instance, climate change and catastrophic events such as floods or hurricanes may become detectable through automatic causal frame tracking at the scale of large collections of articles, where this association appears statistically more often. Detection becomes far more challenging, however, when the aim is to classify small sets of statements in freer expression environments such as comment spheres.
In other words, the problem of meaning aggregation is closely related to issues of scale and aggregation over utterances. The more fine-grained the semantic resolution is, that is, the more specific the cause or effect is that one is interested in, the less probable it is to observe the same statement twice. Moreover, with every independent variable (such as time, different commenters or user groups, etc.), less data on which fine-grained opinion statements are to be detected is available. In the present case of parsed comments from TheGuardian.com, providing insights into the belief system of individual commenters, even if all their statements are aggregated over time, relies on a relatively small set of argumentative statements. This relative sparseness is in part due to the fact that the scope of the semantic frame extractor is confined to the frame evoking elements listed earlier, thus omitting more implicit assertions of causation (i.e. expressions of causation that can only be derived from context and from reading between the lines).
Similarly, as will be explored in the ensuing paragraphs, matters of scale and aggregation determine the types of further linguistic analyses that can be performed on the output of the frame extractor. Within the field of computational linguistics, various techniques have been developed to represent the meaning of words as vectors that capture the contexts in which these words are typically used. Such analyses might reveal patterns of statistical significance, but it is also likely that in creating novel, numerical representations of the original utterances, the semantic structure of argumentatively linked beliefs is lost.
In sum, developing opinion observatories and (potential) debate facilitators entails finding a trade-off, or, in fact, a middle way between macro- and micro-level analyses. On the one hand, one needs to leverage automated analysis methods at the scale of larger collections to maximum advantage. But one also needs to integrate opportunities to interactively zoom into specific aspects of interest and provide more fine-grained information at these levels down to the actual statements. This interplay between macro- and micro-level analyses is explored in the case studies below.
Analyses and applications ::: Spatial renditions of TheGuardian.com's opinion landscape
The main purpose of the observatory under discussion is to provide insight into the belief structures that characterize the opinion landscape on climate change. For reasons outlined above, this raises questions of how to represent opinions and, correspondingly, of which representation is most suited as the atomic unit of comparison between opinions. In general terms, the desired outcome of further processing of the output of the semantic frame extractor is a network representation in which similar cause or effect strings are displayed in close proximity to one another. A high-level description of the pipeline under discussion thus goes as follows. In a first step, it can be decided whether one wants to map cause statements or effect statements. Next, the selected statements are grouped per commenter (i.e. a list is made of all cause statements or effect statements per commenter). These statements are filtered in order to retain only nouns, adjectives and verbs (thereby also omitting frequently occurring verbs such as ‘to be’). The remaining words are then lemmatized, that is, reduced to their dictionary forms. This output is finally translated into a network representation, whereby nodes represent (aggregated) statements, and edges express the semantic relatedness between statements (based on a set overlap whereby the number of shared lemmata is counted).
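To give a concrete sense of the set overlap measure that underlies the edges of this network, a minimal example (with invented lemma sets; names are ours, for illustration only) is sketched below.

# Illustrative only: the weight of an edge is the number of lemmata shared by
# two (already filtered and lemmatized) statements.
def shared_lemmata(lemmata_a: set, lemmata_b: set) -> int:
    return len(lemmata_a & lemmata_b)


cause_a = {"nuclear", "power", "reduce", "emission"}
cause_b = {"nuclear", "power", "increase", "pollution"}
print(shared_lemmata(cause_a, cause_b))  # 2 -> the two statements are linked with weight 2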
As illustrated by two spatial renditions that were created using this approach and visualized using the network analysis tool Gephi BIBREF26, the labels assigned to these nodes (lemmata, full statements, or other) can be adapted to the scope of the analysis.
Analyses and applications ::: Spatial renditions of TheGuardian.com's opinion landscape ::: A macro-level overview: causes addressed in the climate change debate
Suppose one wants to get a first idea about the scope and diversity of an opinion landscape, without any preconceived notions of this landscape's structure or composition. One way of doing this would be to map all of the causes that are mentioned in comments related to articles on climate change, that is, creating an overview of all the causes that have been retrieved by the frame extractor in a single representation. Such a representation would not immediately provide the granularity to state what the beliefs or opinions in the debates actually are, but rather, it might inspire a sense of what those opinions might be about, thus pointing towards potentially interesting phenomena that might warrant closer examination.
Figure FIGREF10, a high-level overview of the opinion landscape, reveals a number of areas to which opinions and beliefs might pertain. The top-left clusters in the diagram for instance reveal opinions about the role of people and countries, whereas on the right-hand side, we find a complementary cluster that might point to beliefs concerning the influence of high or increased CO2-emissions. In between, there is a cluster on power and energy sources, reflecting the energy debate's association with both issues of human responsibility and CO2 emissions. As such, the overview can, at best, already inspire some very general hypotheses about the types of opinions that figure in the climate change debate.
Analyses and applications ::: Spatial renditions of TheGuardian.com's opinion landscape ::: Micro-level investigations: opinions on nuclear power and global warming
Based on the range of topics on which beliefs are expressed, a micro-level analysis can be conducted to reveal what those beliefs are and, for instance, whether they align or contradict each other. This can be achieved by singling out a cause of interest, and mapping out its associated effects.
As revealed by the global overview of the climate change opinion landscape, a portion of the debate concerns power and energy sources. One topic with a particularly interesting role in this debate is nuclear power. Figure FIGREF12 illustrates how a more detailed representation of opinions on this matter can be created by spatially representing all of the effects associated with causes containing the expression `nuclear power'. Again, similar beliefs (in terms of words used in the effects) are positioned closer to each other, thus facilitating the detection of clusters. Commenters on The Guardian for instance express concerns about the deaths or extinction that might be caused by this energy resource. They also voice opinions on its cleanliness, whether or not it might decrease pollution or be its own source of pollution, and how it reduces CO2-emissions in different countries.
Whereas the detailed opinion landscape on `nuclear power' is relatively limited in terms of the number of mined opinions, other topics might reveal more elaborate belief systems. This is for instance the case for the phenomenon of `global warming'. As shown in Figure FIGREF13, opinions on global warming are clustered around the idea of `increases', notably in terms of evaporation, drought, heat waves, intensity of cyclones and storms, etc. An adjacent cluster is related to `extremes', such as extreme summers and weather events, but also extreme colds.
From opinion observation to debate facilitation
The observatory introduced in the preceding paragraphs provides preliminary insights into the range and scope of the beliefs that figure in climate change debates on TheGuardian.com. The observatory as such takes a distinctly descriptive stance, and aims to satisfy, at least in part, the information needs of researchers, activists, journalists and other stakeholders whose main concern is to document, investigate and understand on-line opinion dynamics. However, in the current information sphere, which is marked by polarization, misinformation and a close entanglement with real-world conflicts, taking a mere descriptive or neutral stance might not serve every stakeholder's needs. Indeed, given the often skewed relations between power and information, questions arise as to how media observations might in turn be translated into (political, social or economic) action. Knowledge about opinion dynamics might for instance inform interventions that remedy polarization or disarm conflict. In other words, the construction of (social) media observatories unavoidably raises questions about the possibilities, limitations and, especially, implications of the machine-guided and human-incentivized facilitation of on-line discussions and debates.
Addressing these questions, the present paragraph introduces and explores the concept of a debate facilitator, that is, a device that extends the capabilities of the previously discussed observatory to also promote more interesting and constructive discussions. Concretely, we will conceptualize a device that reveals how the personal opinion landscapes of commenters relate to each other (in terms of overlap or lack thereof), and we will discuss what steps might potentially be taken on the basis of such representation to balance the debate. Geared towards possible interventions in the debate, such a device may thus go well beyond the observatory's objectives of making opinion processes and conflicts more transparent, which concomitantly raises a number of serious concerns that need to be acknowledged.
On rather fundamental ground, tools that steer debates in one way or another may easily become manipulative and dangerous instruments in the hands of certain interest groups. Various aspects of our daily lives are for instance already implicitly guided by recommender systems, the purpose and impact of which can be rather opaque. For this reason, research efforts across disciplines are directed at scrutinizing and rendering such systems more transparent BIBREF28. Such scrutiny is particularly pressing in the context of interventions on on-line communication platforms, which have already been argued to enforce affective communication styles that feed rather than resolve conflict. The objectives behind any facilitation device should therefore be made maximally transparent and potential biases should be fully acknowledged at every level, from data ingest to the dissemination of results BIBREF29. More concretely, the endeavour of constructing opinion observatories and facilitators foregrounds matters of `openness' of data and tools, security, ensuring data quality and representative sampling, accounting for evolving data legislation and policy, building communities and trust, and envisioning beneficial implications. By documenting the development process for a potential facilitation device, the present paper aims to contribute to these on-going investigations and debates. Furthermore, every effort has been made to protect the identities of the commenters involved. In the words of media and technology visionary Jaron Lanier, developers and computational social scientists entering this space should remain fundamentally aware of the fact that `digital information is really just people in disguise' BIBREF30.
With these reservations in mind, the proposed approach can be situated among ongoing efforts that lead from debate observation to facilitation. One such pathway, for instance, involves the construction of filters to detect hate speech, misinformation and other forms of expression that might render debates toxic BIBREF31, BIBREF32. Combined with community outreach, language-based filtering and detection tools have proven to raise awareness among social media users about the nature and potential implications of their on-line contributions BIBREF33. Similarly, advances can be expected from approaches that aim to extend the scope of analysis beyond descriptions of a present debate situation in order to model how a debate might evolve over time and how intentions of the participants could be included in such an analysis.
Progress in any of these areas hinges on a further integration of real-world data in the modelling process, as well as a further socio-technical and media-theoretical investigation of how activity on social media platforms and technologies correlate to real-world conflicts. The remainder of this section therefore ventures to explore how conceptual argument communication models for polarization and alignment BIBREF34 might be reconciled with real-world data, and how such models might inform debate facilitation efforts.
From opinion observation to debate facilitation ::: Debate facilitation through models of alignment and polarization
As discussed in previous sections, news websites like TheGuardian.com establish a communicative setting in which agents (users, commenters) exchange arguments about different issues or topics. For those seeking to establish a healthy debate, it could thus be of interest to know how different users relate to each other in terms of their beliefs about a certain issue or topic (in this case climate change). Which beliefs are for instance shared by users and which ones are not? In other words, can we map patterns of alignment or polarization among users?
Figure FIGREF15 ventures to demonstrate how representations of opinion landscapes (generated using the methods outlined above) can be enriched with user information to answer such questions. Specifically, the graph represents the beliefs of two among the most active commenters in the corpus. The opinions of each user are marked using a colour coding scheme: red nodes represent the beliefs of the first user, blue nodes represent the beliefs of the second user. Nodes with a green colour represent beliefs that are shared by both users.
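A minimal sketch of this colour-coding step is given below. It assumes that the belief network has already been built (here as a networkx graph whose nodes are aggregated statements) and that the sets of beliefs per commenter have been collected beforehand; the variable and attribute names are our own illustrative choices:

```python
import networkx as nx

def colour_by_user(graph, beliefs_red, beliefs_blue):
    """Mark each belief node as red (first user), blue (second user)
    or green (shared by both users)."""
    for node in graph.nodes:
        in_red, in_blue = node in beliefs_red, node in beliefs_blue
        if in_red and in_blue:
            graph.nodes[node]["colour"] = "green"
        elif in_red:
            graph.nodes[node]["colour"] = "red"
        elif in_blue:
            graph.nodes[node]["colour"] = "blue"
    return graph

# The annotated graph can then be exported for visual inspection, e.g.:
# nx.write_gexf(colour_by_user(g, beliefs_red, beliefs_blue), "two_users.gexf")
```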
Taking into account again the factors of aggregation that were discussed in the previous section, Figure FIGREF15 supports some preliminary observations about the relationship between the two users in terms of their beliefs. Generally, given the fact that the graph concerns the two most active commenters on the website, it can be seen that the rendered opinion landscape is quite extensive. It is also clear that the belief systems of both users are not unrelated, as nodes of all colours can be found distributed throughout the graph. This is especially the case for the right-hand top cluster and right-hand bottom cluster of the graph, where green, red, and blue nodes are mixed. Since both users are discussing on articles on climate change, a degree of affinity between opinions or beliefs is to be expected.
Upon closer examination, a number of disparities between the belief systems of the two commenters can be detected. Considering the left-hand top cluster and center of the graph, it becomes clear that only the red commenter uses a selection of terms related to the economic and socio-political realm (e.g. `people', `american', `nation', `government') and industry (e.g. `fuel', `industry', `car', etc.). The blue commenter, on the other hand, engages exclusively with a range of terms that could be deemed more technical and scientific in nature (e.g. `feedback', `property', `output', `trend', `variability', etc.). From the graph, it also follows that the blue commenter does not enter into the red commenter's `social' segments of the graph as frequently as the red commenter enters the more scientifically-oriented clusters of the graph (although in the latter cases the red commenter does not use the specific technical terminology of the blue commenter). The cluster where both beliefs mingle the most (and where overlap can be observed) is the top right cluster. This overlap is constituted by very general terms (e.g. `climate', `change', and `science'). In sum, the graph reveals that the commenters' beliefs are positioned most closely to each other on the most general aspects of the debate, whereas there is less relatedness on the social and more technical aspects of the debate. In this regard, the depicted situation seemingly evokes currently on-going debates about the role or responsibilities of the people or individuals versus that of experts when it comes to climate change BIBREF35, BIBREF36, BIBREF37.
What forms of debate facilitation, then, could be based on these observations? And what kind of collective effects can be expected? As follows from the above, beliefs expressed by the two commenters shown here (which are selected based on their active participation rather than actual engagement or dialogue with one another) are to some extent complementary, as the blue commenter, who displays a scientifically-oriented system of beliefs, does not readily engage with the social topics discussed by the red commenter. As such, the overall opinion landscape of the climate change debate could potentially be enriched with novel perspectives if the blue commenter were invited to engage in a debate about such topics as industry and government. Similarly, one could explore the possibility of providing explanatory tools or additional references on occasions where the debate takes a more technical turn.
However, argument-based models of collective attitude formation BIBREF38, BIBREF34 also tell us to be cautious about such potential interventions. Following the theory underlying these models, different opinion groups prevailing during different periods of a debate will activate different argumentative associations. Facilitating exchange between users with complementary arguments supporting similar opinions may enforce biased argument pools BIBREF39 and lead to increasing polarization at the collective level. In the example considered here the two commenters agree on the general topic, but the analysis suggests that they might have different opinions about the adequate direction of specific climate change action. A more fine–grained automatic detection of cognitive and evaluative associations between arguments and opinions is needed for a reliable use of models to predict what would come out of facilitating exchange between two specific users. In this regard, computational approaches to the linguistic analysis of texts such as semantic frame extraction offer productive opportunities for empirically modelling opinion dynamics. Extraction of causation frames allows one to disentangle cause-effect relations between semantic units, which provides a productive step towards mapping and measuring structures of cognitive associations. These opportunities are to be explored by future work.
Conclusion
Ongoing transitions from a print-based media ecology to on-line news and discussion platforms have put traditional forms of news production and consumption at stake. Many challenges related to how information is currently produced and consumed come to a head in news website comment sections, which harbour the potential of providing new insights into how cultural conflicts emerge and evolve. On the basis of an observatory for analyzing climate change-related comments from TheGuardian.com, this article has critically examined possibilities and limitations of the machine-assisted exploration and possible facilitation of on-line opinion dynamics and debates.
Beyond technical and modelling pathways, this examination brings into view broader methodological and epistemological aspects of the use of digital methods to capture and study the flow of on-line information and opinions. Notably, the proposed approaches lift questions of computational analysis and interpretation that can be tied to an overarching tension between `distant' and `close reading' BIBREF40. In other words, monitoring on-line opinion dynamics means embracing the challenges and associated trade-offs that come with investigating large quantities of information through computational, text-analytical means, but doing this in such a way that nuance and meaning are not lost in the process.
Establishing productive cross-overs between the level of opinions mined at scale (for instance through the lens of causation frames) and the detailed, closer looks at specific conversations, interactions and contexts depends on a series of preliminaries. One of these is the continued availability of high-quality, accessible data. As the current on-line media ecology is recovering from recent privacy-related scandals (e.g. Cambridge Analytica), such data for obvious reasons is not always easy to come by. In the same legal and ethical vein, reproducibility and transparency of models are crucial to the further development of analytical tools and methods. As the experiments discussed in this paper have revealed, a key factor in this undertaking is the human faculty of interpretation. Just like the encoding schemes introduced by Axelrod and others before the widespread use of computational methods, present-day pipelines and tools foreground the role of human agents as the primary source of meaning attribution.
<This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 732942 (Opinion Dynamics and Cultural Conflict in European Spaces – www.Odycceus.eu).> | No |
a939a53cabb4893b2fd82996f3dbe8688fdb7bbb | a939a53cabb4893b2fd82996f3dbe8688fdb7bbb_0 | Q: How is the quality of the discussion evaluated?
Text: Introduction ::: Background
Over the past two decades, the rise of social media and the digitization of news and discussion platforms have radically transformed how individuals and groups create, process and share news and information. As Alan Rusbridger, former editor-in-chief of the newspaper The Guardian, has it, these technologically-driven shifts in the ways people communicate, organize themselves and express their beliefs and opinions, have
empower[ed] those that were never heard, creating a new form of politics and turning traditional news corporations inside out. It is impossible to think of Donald Trump; of Brexit; of Bernie Sanders; of Podemos; of the growth of the far right in Europe; of the spasms of hope and violent despair in the Middle East and North Africa without thinking also of the total inversion of how news is created, shared and distributed. Much of it is liberating and inspiring. Some of it is ugly and dark. And something - the centuries-old craft of journalism - is in danger of being lost BIBREF0.
Rusbridger's observation that the present media-ecology puts traditional notions of politics, journalism, trust and truth at stake is a widely shared one BIBREF1, BIBREF2, BIBREF3. As such, it has sparked interdisciplinary investigations, diagnoses and ideas for remedies across the economical, socio-political, and technological spectrum, challenging our existing assumptions and epistemologies BIBREF4, BIBREF5. Among these lines of inquiry, particular strands of research from the computational social sciences are addressing pressing questions of how emerging technologies and digital methods might be operationalized to regain a grip on the dynamics that govern the flow of on-line news and its associated multitudes of voices, opinions and conflicts. Could the information circulating on on-line (social) news platforms for instance be mined to better understand and analyze the problems facing our contemporary society? Might such data mining and analysis help us to monitor the growing number of social conflicts and crises due to cultural differences and diverging world-views? And finally, would such an approach potentially facilitate early detection of conflicts and even ways to resolve them before they turn violent?
Answering these questions requires further advances in the study of cultural conflict based on digital media data. This includes the development of fine-grained representations of cultural conflict based on theoretically-informed text analysis, the integration of game-theoretical approaches to models of polarization and alignment, as well as the construction of accessible tools and media-monitoring observatories: platforms that foster insight into the complexities of social behaviour and opinion dynamics through automated computational analyses of (social) media data. Through an interdisciplinary approach, the present article aims to make both a practical and theoretical contribution to these aspects of the study of opinion dynamics and conflict in new media environments.
Introduction ::: Objective
The objective of the present article is to critically examine possibilities and limitations of machine-guided exploration and potential facilitation of on-line opinion dynamics on the basis of an experimental data analytics pipeline or observatory for mining and analyzing climate change-related user comments from the news website of The Guardian (TheGuardian.com). Combining insights from the social and political sciences with computational methods for the linguistic analysis of texts, this observatory provides a series of spatial (network) representations of the opinion landscapes on climate change on the basis of causation frames expressed in news website comments. This allows for the exploration of opinion spaces at different levels of detail and aggregation.
Technical and theoretical questions related to the proposed method and infrastructure for the exploration and facilitation of debates will be discussed in three sections. The first section concerns notions of how to define what constitutes a belief or opinion and how these can be mined from texts. To this end, an approach based on the automated extraction of semantic frames expressing causation is proposed. The observatory thus builds on the theoretical premise that expressions of causation such as `global warming causes rises in sea levels' can be revelatory for a person or group's underlying belief systems. Through a further technical description of the observatory's data-analytical components, section two of the paper deals with matters of spatially modelling the output of the semantic frame extractor and how this might be achieved without sacrificing nuances of meaning. The final section of the paper, then, discusses how insights gained from technologically observing opinion dynamics can inform conceptual modelling efforts and approaches to on-line opinion facilitation. As such, the paper brings into view and critically evaluates the fundamental conceptual leap from machine-guided observation to debate facilitation and intervention.
Through the case examples from The Guardian's website and the theoretical discussions explored in these sections, the paper intends to make a twofold contribution to the fields of media studies, opinion dynamics and computational social science. Firstly, the paper introduces and chains together a number of data analytics components for social media monitoring (and facilitation) that were developed in the context of the <project name anonymized for review> infrastructure project. The <project name anonymized for review> infrastructure makes the components discussed in this paper available as open web services in order to foster reproducibility and further experimentation and development <infrastructure reference URL anonymized for review>. Secondly, and supplementing these technological and methodological gains, the paper addresses a number of theoretical, epistemological and ethical questions that are raised by experimental approaches to opinion exploration and facilitation. This notably includes methodological questions on the preservation of meaning through text and data mining, as well as the role of human interpretation, responsibility and incentivisation in observing and potentially facilitating opinion dynamics.
Introduction ::: Data: the communicative setting of TheGuardian.com
In order to study on-line opinion dynamics and build the corresponding climate change opinion observatory discussed in this paper, a corpus of climate-change related news articles and news website comments was analyzed. Concretely, articles from the ‘climate change’ subsection from the news website of The Guardian dated from 2009 up to April 2019 were processed, along with up to 200 comments and associated metadata for articles where commenting was enabled at the time of publication. The choice for studying opinion dynamics using data from The Guardian is motivated by this news website's prominent position in the media landscape as well as its communicative setting, which is geared towards user engagement. Through this interaction with readers, the news platform embodies many of the recent shifts that characterize our present-day media ecology.
TheGuardian.com is generally acknowledged to be one of the UK's leading online newspapers, with 8.2 million unique visitors per month as of May 2013 BIBREF6. The website consists of a core news site, as well as a range of subsections that allow for further classification and navigation of articles. Articles related to climate change can for instance be accessed by navigating through the `News' section, over the subsection `environment', to the subsubsection `climate change' BIBREF7. All articles on the website can be read free of charge, as The Guardian relies on a business model that combines revenues from advertising, voluntary donations and paid subscriptions.
Apart from offering high-quality, independent journalism on a range of topics, a distinguishing characteristic of The Guardian is its penchant for reader involvement and engagement. Adapting to the changing media landscape and appropriating business models that fit the transition from print to on-line news media, the Guardian has transformed itself into a platform that enables forms of citizen journalism and blogging, and welcomes readers' comments on news articles BIBREF0. To comment on articles, a reader is required to create a user account, which provides a unique user name and a user profile page with a stable URL. According to the website's help pages, providing users with an identity that is consistently recognized by the community fosters proper on-line community behaviour BIBREF8. Registered users can post comments on content that is open to commenting, and these comments are moderated by a dedicated moderation team according to The Guardian's community standards and participation guidelines BIBREF9. In support of digital methods and innovative approaches to journalism and data mining, The Guardian has launched an open API (application programming interface) through which developers can access different types of content BIBREF10. It should be noted that at the moment of writing this article, readers' comments are not accessible through this API. For the scientific and educational purposes of this paper, comments were thus consulted using a dedicated scraper.
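For the article side of the corpus, retrieval could proceed along the lines sketched below, using the open content API mentioned above (reader comments, as noted, are not available through it). The endpoint structure and parameter names shown here are assumptions to be checked against the current API documentation, and the API key is a placeholder:

```python
import requests

# Fetch climate change-related articles via the Guardian's open content API.
# Parameter names are assumptions to be verified against the API documentation.
API_URL = "https://content.guardianapis.com/search"
params = {
    "tag": "environment/climate-change",
    "from-date": "2009-01-01",
    "page-size": 50,
    "api-key": "YOUR-API-KEY",  # placeholder: obtain a key from the Guardian Open Platform
}

response = requests.get(API_URL, params=params)
for item in response.json()["response"]["results"]:
    print(item["webPublicationDate"], item["webTitle"])
```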
Taking into account this community and technologically-driven orientation, the communicative setting of The Guardian from which opinions are to be mined and the underlying belief system revealed, is defined by articles, participating commenters and comment spheres (that is, the actual comments aggregated by user, individual article or collection of articles) (see Figure FIGREF4).
In this setting, articles (and previous comments on those articles) can be commented on by participating commenters, each of whom brings to the debate his or her own opinions or belief system. What this belief system might consist of can be inferred on a number of levels, with varying degrees of precision. On the most general level, a generic description of the profile of the average reader of The Guardian can be informative. Such profiles have been compiled by market researchers with the purpose of informing advertisers about the demographic that might be reached through this news website (and other products carrying The Guardian's brand). As of the writing of this article, the audience of The Guardian is presented to advertisers as a `progressive' audience:
Living in a world of unprecedented societal change, with the public narratives around politics, gender, body image, sexuality and diet all being challenged. The Guardian is committed to reflecting the progressive agenda, and reaching the crowd that uphold those values. It’s helpful that we reach over half of progressives in the UK BIBREF11.
A second, equally high-level indicator of the beliefs that might be present on the platform is the links through which articles on climate change can be accessed. An article on climate change might for instance be consulted through the environment section of the news website, but also through the business section. Assuming that business interests might potentially be at odds with environmental concerns, it could be hypothesized that the particular comment sphere for that article consists of at least two potentially clashing frames of mind or belief systems.
However, as will be expanded upon further in this article, truly capturing opinion dynamics requires a more systemic and fine-grained approach. The present article therefore proposes a method for harvesting opinions from the actual comment texts. The presupposition is thereby that comment spheres are marked by a diversity of potentially related opinions and beliefs. Opinions might for instance be connected through the reply structure that marks the comment section of an article, but this connection might also manifest itself on a semantic level (that is, the level of meaning or the actual contents of the comments). To capture this multidimensional, interconnected nature of the comment spheres, it is proposed to represent comment spheres as networks, where the nodes represent opinions and beliefs, and edges the relationships between these beliefs (see the spatial representation of beliefs infra). The use of precision language tools to extract such beliefs and their mutual relationships, as will be explored in the following sections, can open up new pathways of model validation and creation.
Mining opinions and beliefs from texts
In traditional experimental settings, survey techniques and associated statistical models provide researchers with established methods to gauge and analyze the opinions of a population. When studying opinion landscapes through on-line social media, however, harvesting beliefs from big textual data such as news website comments and developing or appropriating models for their analysis is a non-trivial task BIBREF12, BIBREF13, BIBREF14.
In the present context, two challenges related to data-gathering and text mining need to be addressed: (1) defining what constitutes an expression of an opinion or belief, and (2) associating this definition with a pattern that might be extracted from texts. Recent scholarship in the fields of natural language processing (NLP) and argumentation mining has yielded a range of instruments and methods for the (automatic) identification of argumentative claims in texts BIBREF15, BIBREF16. Adding to these instruments and methods, the present article proposes an approach in which belief systems or opinions on climate change are accessed through expressions of causation.
Mining opinions and beliefs from texts ::: Causal mapping methods and the climate change debate
The climate change debate is often characterized by expressions of causation, that is, expressions linking a certain cause with a certain effect. Cultural or societal clashes on climate change might for instance concern diverging assessments of whether global warming is man-made or not BIBREF17. Based on such examples, it can be stated that expressions of causation are closely associated with opinions or beliefs, and that as such, these expressions can be considered a valuable indicator for the range and diversity of the opinions and beliefs that constitute the climate change debate. The observatory under discussion therefore focuses on the extraction and analysis of linguistic patterns called causation frames. As will be further demonstrated in this section, the benefit of this causation-based approach is that it offers a systemic approach to opinion dynamics that comprises different layers of meaning, notably the cognitive or social meaningfulness of patterns on account of their being expressions of causation, as well as further lexical and semantic information that might be used for analysis and comparison.
The study of expressions of causation as a method for accessing and assessing belief systems and opinions has been formalized and streamlined since the 1970s. Pioneered by political scientist Robert Axelrod and others, this causal mapping method (also referred to as `cognitive mapping') was introduced as a means of reconstructing and evaluating administrative and political decision-making processes, based on the principle that
the notion of causation is vital to the process of evaluating alternatives. Regardless of philosophical difficulties involved in the meaning of causation, people do evaluate complex policy alternatives in terms of the consequences a particular choice would cause, and ultimately of what the sum of these effects would be. Indeed, such causal analysis is built into our language, and it would be very difficult for us to think completely in other terms, even if we tried BIBREF18.
Axelrod's causal mapping method comprises a set of conventions to graphically represent networks of causes and effects (the nodes in a network) as well as the qualitative aspects of this relation (the network’s directed edges, notably assertions of whether the causal linkage is positive or negative). These causes and effects are to be extracted from relevant sources by means of a series of heuristics and an encoding scheme (it should be noted that for this task Axelrod had human readers in mind). The graphs resulting from these efforts provide a structural overview of the relations among causal assertions (and thus beliefs):
The basic elements of the proposed system are quite simple. The concepts a person uses are represented as points, and the causal links between these concepts are represented as arrows between these points. This gives a pictorial representation of the causal assertions of a person as a graph of points and arrows. This kind of representation of assertions as a graph will be called a cognitive map. The policy alternatives, all of the various causes and effects, the goals, and the ultimate utility of the decision maker can all be thought of as concept variables, and represented as points in the cognitive map. The real power of this approach appears when a cognitive map is pictured in graph form; it is then relatively easy to see how each of the concepts and causal relationships relate to each other, and to see the overall structure of the whole set of portrayed assertions BIBREF18.
In order to construct these cognitive maps based on textual information, Margaret Tucker Wrightson provides a set of reading and coding rules for extracting cause concepts, linkages (relations) and effect concepts from expressions in the English language. The assertion `Our present topic is the militarism of Germany, which is maintaining a state of tension in the Baltic Area' might for instance be encoded as follows: `the militarism of Germany' (cause concept), /+/ (a positive relationship), `maintaining a state of tension in the Baltic area' (effect concept) BIBREF19. Emphasizing the role of human interpretation, it is acknowledged that no strict set of rules can capture the entire spectrum of causal assertions:
The fact that the English language is as varied as those who use it makes the coder's task complex and difficult. No set of rules will completely solve the problems he or she might encounter. These rules, however, provide the coder with guidelines which, if conscientiously followed, will result in outcomes meeting social scientific standards of comparative validity and reliability BIBREF19.
To facilitate the task of encoders, the causal mapping method has gone through various iterations since its original inception, all the while preserving its original premises. Recent software packages have for instance been devised to support the data encoding and drawing process BIBREF20. As such, causal or cognitive mapping has become an established opinion and decision mining method within political science, business and management, and other domains. It has notably proven to be a valuable method for the study of recent societal and cultural conflicts. Thomas Homer-Dixon et al. for instance rely on cognitive-affective maps created from survey data to analyze interpretations of the housing crisis in Germany, Israeli attitudes toward the Western Wall, and moderate versus skeptical positions on climate change BIBREF21. Similarly, Duncan Shaw et al. venture to answer the question of `Why did Brexit happen?' by building causal maps of nine televised debates that were broadcast during the four weeks leading up to the Brexit referendum BIBREF22.
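By way of illustration, the coded assertion on `German militarism' discussed above could be stored as a signed, directed graph. This is one straightforward way of making such cognitive maps amenable to computational processing; the representation chosen here is our own illustration, not Axelrod's original notation:

```python
import networkx as nx

# A cognitive map as a signed, directed graph: nodes are concepts,
# edges carry the sign of the asserted causal relationship.
cognitive_map = nx.DiGraph()
cognitive_map.add_edge(
    "the militarism of Germany",
    "maintaining a state of tension in the Baltic area",
    sign="+",  # the /+/ relationship from the coding example above
)

for cause, effect, data in cognitive_map.edges(data=True):
    print(f"{cause} --({data['sign']})--> {effect}")
```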
In order to appropriate the method of causal mapping to the study of on-line opinion dynamics, it needs to be expanded from applications at the scale of human readers and relatively small corpora of archival documents and survey answers, to the realm of `big' textual data and larger quantities of information. This attuning of cognitive mapping methods to the large-scale processing of texts required for media monitoring necessarily involves a degree of automation, as will be explored in the next section.
Mining opinions and beliefs from texts ::: Automated causation tracking with the Penelope semantic frame extractor
As outlined in the previous section, causal mapping is based on the extraction of so-called cause concepts, (causal) relations, and effect concepts from texts. The complexity of each of these concepts can range from the relatively simple (as illustrated by the easily-identifiable cause and effect relation in the example of `German militarism' cited earlier), to more complex assertions such as `The development of international cooperation in all fields across the ideological frontiers will gradually remove the hostility and fear that poison international relations', which contains two effect concepts (viz. `the hostility that poisons international relations' and `the fear that poisons international relations'). As such, this statement would have to be encoded as a double relationship BIBREF19.
The coding guidelines in BIBREF19 further reflect that extracting cause and effect concepts from texts is an operation that works on both the syntactical and semantic levels of assertions. This can be illustrated by means of the guidelines for analyzing the aforementioned causal assertion on German militarism:
1. The first step is the realization of the relationship. Does a subject affect an object? 2. Having recognized that it does, the isolation of the cause and effects concepts is the second step. As the sentence structure indicates, "the militarism of Germany" is the causal concept, because it is the initiator of the action, while the direct object clause, "a state of tension in the Baltic area," constitutes that which is somehow influenced, the effect concept BIBREF19.
In the field of computational linguistics, from which the present paper borrows part of its methods, this procedure for extracting information related to causal assertions from texts can be considered an instance of an operation called semantic frame extraction BIBREF23. A semantic frame captures a coherent part of the meaning of a sentence in a structured way. As documented in the FrameNet project BIBREF24, the Causation frame is defined as follows:
A Cause causes an Effect. Alternatively, an Actor, a participant of a (implicit) Cause, may stand in for the Cause. The entity Affected by the Causation may stand in for the overall Effect situation or event BIBREF25.
In a linguistic utterance such as a statement in a news website comment, the Causation frame can be evoked by a series of lexical units, such as `cause', `bring on', etc. In the example `If such a small earthquake CAUSES problems, just imagine a big one!', the Causation frame is triggered by the verb `causes', which therefore is called the frame evoking element. The Cause slot is filled by `a small earthquake', the Effect slot by `problems' BIBREF25.
In order to automatically mine cause and effects concepts from the corpus of comments on The Guardian, the present paper uses the Penelope semantic frame extractor: a tool that exploits the fact that semantic frames can be expressed as form-meaning mappings called constructions. Notably, frames were extracted from Guardian comments by focusing on the following lexical units (verbs, prepositions and conjunctions), listed in FrameNet as frame evoking elements of the Causation frame: Cause.v, Due to.prep, Because.c, Because of.prep, Give rise to.v, Lead to.v or Result in.v.
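The sketch below is emphatically not the Penelope extractor itself (which relies on constructions, i.e. form-meaning mappings, rather than surface patterns), but a much-simplified stand-in that conveys the underlying idea: an utterance is split around one of the causation-evoking lexical units into a candidate cause and a candidate effect:

```python
# Naive causation-frame spotter: split a sentence around a frame-evoking element.
# The cause precedes these markers and the effect follows them.
CAUSE_FIRST = ["gives rise to", "give rise to", "leads to", "lead to",
               "results in", "result in", "caused", "causes", "cause"]
# The effect precedes these markers and the cause follows them.
EFFECT_FIRST = ["because of", "because", "due to"]

def extract_causation(sentence):
    lowered = sentence.lower()
    markers = [(m, True) for m in CAUSE_FIRST] + [(m, False) for m in EFFECT_FIRST]
    for marker, cause_first in markers:
        idx = lowered.find(f" {marker} ")
        if idx == -1:
            continue
        left = sentence[:idx].strip(" ,.")
        right = sentence[idx + len(marker) + 2:].strip(" ,.!")
        return ({"cause": left, "effect": right, "trigger": marker} if cause_first
                else {"cause": right, "effect": left, "trigger": marker})
    return None

print(extract_causation("If such a small earthquake causes problems, just imagine a big one!"))
# Like the real extractor's output, the result stays close to the noisy original:
# {'cause': 'If such a small earthquake', 'effect': 'problems, just imagine a big one',
#  'trigger': 'causes'}
```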
The strings output by the semantic frame extractor adhere closely to the original utterances, preserving all of the real-world noisiness of the comments' causation frames.
The output of the semantic frame extractor as such is used as the input for the ensuing pipeline components in the climate change opinion observatory. The aim of a further analysis of these frames is to find patterns in the beliefs and opinions they express. As will be discussed in the following section, which focuses on applications and cases, maintaining semantic nuances in this further analytic process foregrounds the role of models and aggregation levels.
Analyses and applications
Based on the presupposition that relations between causation frames reveal beliefs, the output of the semantic frame extractor creates various opportunities for exploring opinion landscapes and empirically validating conceptual models for opinion dynamics.
In general, any alignment of conceptual models and real-world data is an exercise in compromising, as the idealized, abstract nature of models is likely to be at odds with the messiness of the actual data. Finding such a compromise might for instance involve a reduction of the simplicity or elegance of the model, or, on the other hand, an increased aggregation (and thus reduced granularity) of the data.
Addressing this challenge, the current section reflects on questions of data modelling, aggregation and meaning by exploring, through case examples, different spatial representations of opinion landscapes mined from TheGuardian.com's comment sphere. These spatial renditions will be understood as network visualizations in which nodes represent argumentative statements (beliefs) and edges the degree of similarity between these statements. On the most general level, then, such a representation can consist of an overview of all the causes expressed in the corpus of climate change-related Guardian comments. This type of visualization provides a bird's-eye view of the entire opinion landscape as mined from the comment texts. In turn, such a general overview might elicit more fine-grained, micro-level investigations, in which a particular cause is singled out and its more specific associated effects are mapped. These macro and micro level overviews come with their own proper potential for theory building and evaluation, as well as distinct requirements for the depth or detail of meaning that needs to be represented. To get the most general sense of an opinion landscape one might for instance be more tolerant of abstract renditions of beliefs (e.g. by reducing statements to their most frequently used terms), but for more fine-grained analysis one requires more context and nuance (e.g. adhering as closely as possible to the original comment).
Analyses and applications ::: Aggregation
As follows from the above, one of the most fundamental questions when building automated tools to observe opinion dynamics that potentially aim at advising means of debate facilitation concerns the level of meaning aggregation. A clear argumentative or causal association between, for instance, climate change and catastrophic events such as floods or hurricanes may become detectable by automatic causal frame tracking at the scale of large collections of articles where this association might appear statistically more often, but detection comes with great challenges when the aim is to classify certain sets of only a few statements in more free expression environments such as comment spheres.
In other words, the problem of meaning aggregation is closely related to issues of scale and aggregation over utterances. The more fine-grained the semantic resolution is, that is, the more specific the cause or effect is that one is interested in, the less probable it is to observe the same statement twice. Moreover, with every independent variable (such as time, different commenters or user groups, etc.), less data on which fine-grained opinion statements are to be detected is available. In the present case of parsed comments from TheGuardian.com, providing insights into the belief system of individual commenters, even if all their statements are aggregated over time, relies on a relatively small set of argumentative statements. This relative sparseness is in part due to the fact that the scope of the semantic frame extractor is confined to the frame evoking elements listed earlier, thus omitting more implicit assertions of causation (i.e. expressions of causation that can only be derived from context and from reading between the lines).
Similarly, as will be explored in the ensuing paragraphs, matters of scale and aggregation determine the types of further linguistic analyses that can be performed on the output of the frame extractor. Within the field of computational linguistics, various techniques have been developed to represent the meaning of words as vectors that capture the contexts in which these words are typically used. Such analyses might reveal patterns of statistical significance, but it is also likely that in creating novel, numerical representations of the original utterances, the semantic structure of argumentatively linked beliefs is lost.
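For example, off-the-shelf pretrained embeddings can be used to score the similarity of two mined statements. The generic illustration below (using spaCy's medium English model, which ships with word vectors) is meant to indicate the type of analysis at stake, not the observatory's actual configuration:

```python
import spacy

# Compare two mined statements via averaged pretrained word vectors.
nlp = spacy.load("en_core_web_md")  # a model distributed with word vectors
a = nlp("global warming causes more intense heat waves")
b = nlp("climate change leads to extreme summers")
print(a.similarity(b))  # cosine similarity of the two statement vectors
```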
In sum, developing opinion observatories and (potential) debate facilitators entails finding a trade-off, or, in fact, a middle way between macro- and micro-level analyses. On the one hand, one needs to leverage automated analysis methods at the scale of larger collections to maximum advantage. But one also needs to integrate opportunities to interactively zoom into specific aspects of interest and provide more fine-grained information at these levels down to the actual statements. This interplay between macro- and micro-level analyses is explored in the case studies below.
Analyses and applications ::: Spatial renditions of TheGuardian.com's opinion landscape
The main purpose of the observatory under discussion is to provide insight into the belief structures that characterize the opinion landscape on climate change. For reasons outlined above, this raises questions of how to represent opinions and, correspondingly, of determining which representation is most suited as the atomic unit of comparison between opinions. In general terms, the desired outcome of further processing of the output of the semantic frame extractor is a network representation in which similar cause or effect strings are displayed in close proximity to one another. A high-level description of the pipeline under discussion thus goes as follows. In a first step, it can be decided whether one wants to map cause statements or effect statements. Next, the selected statements are grouped per commenter (i.e. a list is made of all cause statements or effect statements per commenter). These statements are filtered in order to retain only nouns, adjectives and verbs (thereby also omitting frequently occurring verbs such as ‘to be’). The remaining words are then lemmatized, that is, reduced to their dictionary forms. This output is finally translated into a network representation, whereby nodes represent (aggregated) statements, and edges express the semantic relatedness between statements (based on a set overlap whereby the number of shared lemmata is counted).
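A compact sketch of this pipeline is given below. It assumes spaCy for part-of-speech filtering and lemmatization and networkx for graph construction; these libraries are plausible stand-ins chosen for illustration, not necessarily the components used in the actual infrastructure:

```python
import itertools
import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")
KEEP = {"NOUN", "ADJ", "VERB"}

def to_lemmata(statement):
    """Reduce a statement to the lemmata of its nouns, adjectives and verbs."""
    return {token.lemma_.lower() for token in nlp(statement)
            if token.pos_ in KEEP and not token.is_stop}  # drops e.g. 'to be'

def build_belief_network(statements_by_commenter):
    """Nodes are (aggregated) statements; edge weights count shared lemmata."""
    graph = nx.Graph()
    lemmata = {}
    for commenter, statements in statements_by_commenter.items():
        for statement in statements:
            graph.add_node(statement, commenter=commenter)
            lemmata[statement] = to_lemmata(statement)
    for a, b in itertools.combinations(graph.nodes, 2):
        shared = len(lemmata[a] & lemmata[b])
        if shared:
            graph.add_edge(a, b, weight=shared)
    return graph

# The resulting graph can be exported (e.g. nx.write_gexf(graph, "beliefs.gexf"))
# and then laid out and inspected in Gephi.
```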
As illustrated by two spatial renditions that were created using this approach and visualized using the network analysis tool Gephi BIBREF26, the labels assigned to these nodes (lemmata, full statements, or other) can be appropriated to the scope of the analysis.
Analyses and applications ::: Spatial renditions of TheGuardian.com's opinion landscape ::: A macro-level overview: causes addressed in the climate change debate
Suppose one wants to get a first idea about the scope and diversity of an opinion landscape, without any preconceived notions of this landscape's structure or composition. One way of doing this would be to map all of the causes that are mentioned in comments related to articles on climate change, that is, creating an overview of all the causes that have been retrieved by the frame extractor in a single representation. Such a representation would not immediately provide the granularity to state what the beliefs or opinions in the debates actually are, but rather, it might inspire a sense of what those opinions might be about, thus pointing towards potentially interesting phenomena that might warrant closer examination.
Figure FIGREF10, a high-level overview of the opinion landscape, reveals a number of areas to which opinions and beliefs might pertain. The top-left clusters in the diagram for instance reveal opinions about the role of people and countries, whereas on the right-hand side, we find a complementary cluster that might point to beliefs concerning the influence of high or increased CO2-emissions. In between, there is a cluster on power and energy sources, reflecting the energy debate's association to both issues of human responsibility and CO2 emissions. As such, the overview can already inspire, potentially at best, some very general hypotheses about the types of opinions that figure in the climate change debate.
Analyses and applications ::: Spatial renditions of TheGuardian.com's opinion landscape ::: Micro-level investigations: opinions on nuclear power and global warming
Based on the range of topics on which beliefs are expressed, a micro-level analysis can be conducted to reveal what those beliefs are and, for instance, whether they align or contradict each other. This can be achieved by singling out a cause of interest, and mapping out its associated effects.
As revealed by the global overview of the climate change opinion landscape, a portion of the debate concerns power and energy sources. One topic with a particularly interesting role in this debate is nuclear power. Figure FIGREF12 illustrates how a more detailed representation of opinions on this matter can be created by spatially representing all of the effects associated with causes containing the expression `nuclear power'. Again, similar beliefs (in terms of words used in the effects) are positioned closer to each other, thus facilitating the detection of clusters. Commenters on The Guardian for instance express concerns about the deaths or extinction that might be caused by this energy resource. They also voice opinions on its cleanliness, whether or not it might decrease pollution or be its own source of pollution, and how it reduces CO2-emissions in different countries.
Whereas the detailed opinion landscape on `nuclear power' is relatively limited in terms of the number of mined opinions, other topics might reveal more elaborate belief systems. This is for instance the case for the phenomenon of `global warming'. As shown in Figure FIGREF13, opinions on global warming are clustered around the idea of `increases', notably in terms of evaporation, drought, heat waves, intensity of cyclones and storms, etc. An adjacent cluster is related to `extremes', such as extreme summers and weather events, but also extreme colds.
From opinion observation to debate facilitation
The observatory introduced in the preceding paragraphs provides preliminary insights into the range and scope of the beliefs that figure in climate change debates on TheGuardian.com. The observatory as such takes a distinctly descriptive stance, and aims to satisfy, at least in part, the information needs of researchers, activists, journalists and other stakeholders whose main concern is to document, investigate and understand on-line opinion dynamics. However, in the current information sphere, which is marked by polarization, misinformation and a close entanglement with real-world conflicts, taking a mere descriptive or neutral stance might not serve every stakeholder's needs. Indeed, given the often skewed relations between power and information, questions arise as to how media observations might in turn be translated into (political, social or economic) action. Knowledge about opinion dynamics might for instance inform interventions that remedy polarization or disarm conflict. In other words, the construction of (social) media observatories unavoidably lifts questions about the possibilities, limitations and, especially, implications of the machine-guided and human-incentivized facilitation of on-line discussions and debates.
Addressing these questions, the present paragraph introduces and explores the concept of a debate facilitator, that is, a device that extends the capabilities of the previously discussed observatory to also promote more interesting and constructive discussions. Concretely, we will conceptualize a device that reveals how the personal opinion landscapes of commenters relate to each other (in terms of overlap or lack thereof), and we will discuss what steps might potentially be taken on the basis of such representation to balance the debate. Geared towards possible interventions in the debate, such a device may thus go well beyond the observatory's objectives of making opinion processes and conflicts more transparent, which concomitantly raises a number of serious concerns that need to be acknowledged.
On rather fundamental ground, tools that steer debates in one way or another may easily become manipulative and dangerous instruments in the hands of certain interest groups. Various aspects of our daily lives are for instance already implicitly guided by recommender systems, the purpose and impact of which can be rather opaque. For this reason, research efforts across disciplines are directed at scrutinizing and rendering such systems more transparent BIBREF28. Such scrutiny is particularly pressing in the context of interventions on on-line communication platforms, which have already been argued to enforce affective communication styles that feed rather than resolve conflict. The objectives behind any facilitation device should therefore be made maximally transparent and potential biases should be fully acknowledged at every level, from data ingest to the dissemination of results BIBREF29. More concretely, the endeavour of constructing opinion observatories and facilitators foregrounds matters of `openness' of data and tools, security, ensuring data quality and representative sampling, accounting for evolving data legislation and policy, building communities and trust, and envisioning beneficial implications. By documenting the development process for a potential facilitation device, the present paper aims to contribute to these on-going investigations and debates. Furthermore, every effort has been made to protect the identities of the commenters involved. In the words of media and technology visionary Jaron Lanier, developers and computational social scientists entering this space should remain fundamentally aware of the fact that `digital information is really just people in disguise' BIBREF30.
With these reservations in mind, the proposed approach can be situated among ongoing efforts that lead from debate observation to facilitation. One such pathway, for instance, involves the construction of filters to detect hate speech, misinformation and other forms of expression that might render debates toxic BIBREF31, BIBREF32. Combined with community outreach, language-based filtering and detection tools have proven to raise awareness among social media users about the nature and potential implications of their on-line contributions BIBREF33. Similarly, advances can be expected from approaches that aim to extend the scope of analysis beyond descriptions of a present debate situation in order to model how a debate might evolve over time and how intentions of the participants could be included in such an analysis.
Progress in any of these areas hinges on a further integration of real-world data in the modelling process, as well as a further socio-technical and media-theoretical investigation of how activity on social media platforms and technologies correlate to real-world conflicts. The remainder of this section therefore ventures to explore how conceptual argument communication models for polarization and alignment BIBREF34 might be reconciled with real-world data, and how such models might inform debate facilitation efforts.
From opinion observation to debate facilitation ::: Debate facilitation through models of alignment and polarization
As discussed in previous sections, news websites like TheGuardian.com establish a communicative setting in which agents (users, commenters) exchange arguments about different issues or topics. For those seeking to establish a healthy debate, it could thus be of interest to know how different users relate to each other in terms of their beliefs about a certain issue or topic (in this case climate change). Which beliefs are for instance shared by users and which ones are not? In other words, can we map patterns of alignment or polarization among users?
Figure FIGREF15 ventures to demonstrate how representations of opinion landscapes (generated using the methods outlined above) can be enriched with user information to answer such questions. Specifically, the graph represents the beliefs of two among the most active commenters in the corpus. The opinions of each user are marked using a colour coding scheme: red nodes represent the beliefs of the first user, blue nodes represent the beliefs of the second user. Nodes with a green colour represent beliefs that are shared by both users.
Taking into account again the factors of aggregation that were discussed in the previous section, Figure FIGREF15 supports some preliminary observations about the relationship between the two users in terms of their beliefs. Generally, given the fact that the graph concerns the two most active commenters on the website, it can be seen that the rendered opinion landscape is quite extensive. It is also clear that the belief systems of both users are not unrelated, as nodes of all colours can be found distributed throughout the graph. This is especially the case for the right-hand top cluster and right-hand bottom cluster of the graph, where green, red, and blue nodes are mixed. Since both users are discussing on articles on climate change, a degree of affinity between opinions or beliefs is to be expected.
Upon closer examination, a number of disparities between the belief systems of the two commenters can be detected. Considering the left-hand top cluster and center of the graph, it becomes clear that only the red commenter uses a selection of terms related to the economic and socio-political realm (e.g. `people', `american', `nation', `government') and industry (e.g. `fuel', `industry', `car', etc.). The blue commenter, on the other hand, engages exclusively with a range of terms that could be deemed more technical and scientific in nature (e.g. `feedback', `property', `output', `trend', `variability', etc.). From the graph, it also follows that the blue commenter does not enter into the red commenter's `social' segments of the graph as frequently as the red commenter enters the more scientifically-oriented clusters of the graph (although in the latter cases the red commenter does not use the specific technical terminology of the blue commenter). The cluster where both beliefs mingle the most (and where overlap can be observed) is the top right cluster. This overlap is constituted by very general terms (e.g. `climate', `change', and `science'). In sum, the graph reveals that the commenters' beliefs are positioned most closely to each other on the most general aspects of the debate, whereas there is less relatedness on the social and more technical aspects of the debate. In this regard, the depicted situation seemingly evokes currently on-going debates about the role or responsibilities of the people or individuals versus that of experts when it comes to climate change BIBREF35, BIBREF36, BIBREF37.
What forms of debate facilitation, then, could be based on these observations? And what kind of collective effects can be expected? As follows from the above, beliefs expressed by the two commenters shown here (which are selected based on their active participation rather than actual engagement or dialogue with one another) are to some extent complementary, as the blue commenter, who displays a scientifically-oriented system of beliefs, does not readily engage with the social topics discussed by the red commenter. As such, the overall opinion landscape of the climate change debate could potentially be enriched with novel perspectives if the blue commenter were invited to engage in a debate about such topics as industry and government. Similarly, one could explore the possibility of providing explanatory tools or additional references on occasions where the debate takes a more technical turn.
However, argument-based models of collective attitude formation BIBREF38, BIBREF34 also tell us to be cautious about such potential interventions. Following the theory underlying these models, different opinion groups prevailing during different periods of a debate will activate different argumentative associations. Facilitating exchange between users with complementary arguments supporting similar opinions may enforce biased argument pools BIBREF39 and lead to increasing polarization at the collective level. In the example considered here the two commenters agree on the general topic, but the analysis suggests that they might have different opinions about the adequate direction of specific climate change action. A more fine–grained automatic detection of cognitive and evaluative associations between arguments and opinions is needed for a reliable use of models to predict what would come out of facilitating exchange between two specific users. In this regard, computational approaches to the linguistic analysis of texts such as semantic frame extraction offer productive opportunities for empirically modelling opinion dynamics. Extraction of causation frames allows one to disentangle cause-effect relations between semantic units, which provides a productive step towards mapping and measuring structures of cognitive associations. These opportunities are to be explored by future work.
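As a pointer towards how such model-data integration might proceed, the following toy sketch illustrates the general logic of an argument communication model: agents hold a small set of pro and con arguments, preferentially interact with like-minded agents, adopt arguments from their interaction partners, and derive their opinion from the balance of arguments they hold. All numbers and rules are hypothetical simplifications for illustration; this is not a model calibrated on the Guardian data:

```python
import random

# Toy argument-communication dynamics (illustrative only).
N_AGENTS, N_ARGUMENTS, MEMORY, STEPS = 50, 20, 4, 2000
SIGN = [1 if i < N_ARGUMENTS // 2 else -1 for i in range(N_ARGUMENTS)]  # pro / con

agents = [set(random.sample(range(N_ARGUMENTS), MEMORY)) for _ in range(N_AGENTS)]

def opinion(arguments):
    return sum(SIGN[a] for a in arguments)

for _ in range(STEPS):
    speaker, listener = random.sample(range(N_AGENTS), 2)
    # Homophily: exchange only between agents with sufficiently similar opinions.
    if abs(opinion(agents[speaker]) - opinion(agents[listener])) <= 2:
        argument = random.choice(tuple(agents[speaker]))
        if argument not in agents[listener]:
            agents[listener].add(argument)
            # Bounded memory: forget a random other argument.
            agents[listener].remove(random.choice(tuple(agents[listener] - {argument})))

print(sorted(opinion(a) for a in agents))  # distribution of opinions after the run
```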
Conclusion
Ongoing transitions from a print-based media ecology to on-line news and discussion platforms have put traditional forms of news production and consumption at stake. Many challenges related to how information is currently produced and consumed come to a head in news website comment sections, which harbour the potential of providing new insights into how cultural conflicts emerge and evolve. On the basis of an observatory for analyzing climate change-related comments from TheGuardian.com, this article has critically examined possibilities and limitations of the machine-assisted exploration and possible facilitation of on-line opinion dynamics and debates.
Beyond technical and modelling pathways, this examination brings into view broader methodological and epistemological aspects of the use of digital methods to capture and study the flow of on-line information and opinions. Notably, the proposed approaches raise questions of computational analysis and interpretation that can be tied to an overarching tension between `distant' and `close reading' BIBREF40. In other words, monitoring on-line opinion dynamics means embracing the challenges and associated trade-offs that come with investigating large quantities of information through computational, text-analytical means, but doing this in such a way that nuance and meaning are not lost in the process.
Establishing productive cross-overs between the level of opinions mined at scale (for instance through the lens of causation frames) and the detailed, closer looks at specific conversations, interactions and contexts depends on a series of preliminaries. One of these is the continued availability of high-quality, accessible data. As the current on-line media ecology is recovering from recent privacy-related scandals (e.g. Cambridge Analytica), such data for obvious reasons is not always easy to come by. In the same legal and ethical vein, reproducibility and transparency of models is crucial to the further development of analytical tools and methods. As the experiments discussed in this paper have revealed, a key factor in this undertaking is the human faculty of interpretation. Just like the encoding schemes introduced by Axelrod and others before the widespread use of computational methods, present-day pipelines and tools foreground the role of human agents as the primary source of meaning attribution.
<This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 732942 (Opinion Dynamics and Cultural Conflict in European Spaces – www.Odycceus.eu).> | Unanswerable |
8b99767620fd4efe51428b68841cc3ec06699280 | 8b99767620fd4efe51428b68841cc3ec06699280_0 | Q: What is the technique used for text analysis and mining?
Text: Introduction ::: Background
Over the past two decades, the rise of social media and the digitization of news and discussion platforms have radically transformed how individuals and groups create, process and share news and information. As Alan Rusbridger, former editor-in-chief of the newspaper The Guardian, has it, these technologically-driven shifts in the ways people communicate, organize themselves and express their beliefs and opinions, have
empower[ed] those that were never heard, creating a new form of politics and turning traditional news corporations inside out. It is impossible to think of Donald Trump; of Brexit; of Bernie Sanders; of Podemos; of the growth of the far right in Europe; of the spasms of hope and violent despair in the Middle East and North Africa without thinking also of the total inversion of how news is created, shared and distributed. Much of it is liberating and inspiring. Some of it is ugly and dark. And something - the centuries-old craft of journalism - is in danger of being lost BIBREF0.
Rusbridger's observation that the present media-ecology puts traditional notions of politics, journalism, trust and truth at stake is a widely shared one BIBREF1, BIBREF2, BIBREF3. As such, it has sparked interdisciplinary investigations, diagnoses and ideas for remedies across the economical, socio-political, and technological spectrum, challenging our existing assumptions and epistemologies BIBREF4, BIBREF5. Among these lines of inquiry, particular strands of research from the computational social sciences are addressing pressing questions of how emerging technologies and digital methods might be operationalized to regain a grip on the dynamics that govern the flow of on-line news and its associated multitudes of voices, opinions and conflicts. Could the information circulating on on-line (social) news platforms for instance be mined to better understand and analyze the problems facing our contemporary society? Might such data mining and analysis help us to monitor the growing number of social conflicts and crises due to cultural differences and diverging world-views? And finally, would such an approach potentially facilitate early detection of conflicts and even ways to resolve them before they turn violent?
Answering these questions requires further advances in the study of cultural conflict based on digital media data. This includes the development of fine-grained representations of cultural conflict based on theoretically-informed text analysis, the integration of game-theoretical approaches to models of polarization and alignment, as well as the construction of accessible tools and media-monitoring observatories: platforms that foster insight into the complexities of social behaviour and opinion dynamics through automated computational analyses of (social) media data. Through an interdisciplinary approach, the present article aims to make both a practical and theoretical contribution to these aspects of the study of opinion dynamics and conflict in new media environments.
Introduction ::: Objective
The objective of the present article is to critically examine possibilities and limitations of machine-guided exploration and potential facilitation of on-line opinion dynamics on the basis of an experimental data analytics pipeline or observatory for mining and analyzing climate change-related user comments from the news website of The Guardian (TheGuardian.com). Combining insights from the social and political sciences with computational methods for the linguistic analysis of texts, this observatory provides a series of spatial (network) representations of the opinion landscapes on climate change on the basis of causation frames expressed in news website comments. This allows for the exploration of opinion spaces at different levels of detail and aggregation.
Technical and theoretical questions related to the proposed method and infrastructure for the exploration and facilitation of debates will be discussed in three sections. The first section concerns notions of how to define what constitutes a belief or opinion and how these can be mined from texts. To this end, an approach based on the automated extraction of semantic frames expressing causation is proposed. The observatory thus builds on the theoretical premise that expressions of causation such as `global warming causes rises in sea levels' can be revelatory for a person or group's underlying belief systems. Through a further technical description of the observatory's data-analytical components, section two of the paper deals with matters of spatially modelling the output of the semantic frame extractor and how this might be achieved without sacrificing nuances of meaning. The final section of the paper, then, discusses how insights gained from technologically observing opinion dynamics can inform conceptual modelling efforts and approaches to on-line opinion facilitation. As such, the paper brings into view and critically evaluates the fundamental conceptual leap from machine-guided observation to debate facilitation and intervention.
Through the case examples from The Guardian's website and the theoretical discussions explored in these sections, the paper intends to make a twofold contribution to the fields of media studies, opinion dynamics and computational social science. Firstly, the paper introduces and chains together a number of data analytics components for social media monitoring (and facilitation) that were developed in the context of the <project name anonymized for review> infrastructure project. The <project name anonymized for review> infrastructure makes the components discussed in this paper available as open web services in order to foster reproducibility and further experimentation and development <infrastructure reference URL anonymized for review>. Secondly, and supplementing these technological and methodological gains, the paper addresses a number of theoretical, epistemological and ethical questions that are raised by experimental approaches to opinion exploration and facilitation. This notably includes methodological questions on the preservation of meaning through text and data mining, as well as the role of human interpretation, responsibility and incentivisation in observing and potentially facilitating opinion dynamics.
Introduction ::: Data: the communicative setting of TheGuardian.com
In order to study on-line opinion dynamics and build the corresponding climate change opinion observatory discussed in this paper, a corpus of climate-change related news articles and news website comments was analyzed. Concretely, articles from the ‘climate change’ subsection from the news website of The Guardian dated from 2009 up to April 2019 were processed, along with up to 200 comments and associated metadata for articles where commenting was enabled at the time of publication. The choice for studying opinion dynamics using data from The Guardian is motivated by this news website's prominent position in the media landscape as well as its communicative setting, which is geared towards user engagement. Through this interaction with readers, the news platform embodies many of the recent shifts that characterize our present-day media ecology.
TheGuardian.com is generally acknowledged to be one of the UK's leading online newspapers, with 8.2 million unique visitors per month as of May 2013 BIBREF6. The website consists of a core news site, as well as a range of subsections that allow for further classification and navigation of articles. Articles related to climate change can for instance be accessed by navigating through the `News' section, over the subsection `environment', to the subsubsection `climate change' BIBREF7. All articles on the website can be read free of charge, as The Guardian relies on a business model that combines revenues from advertising, voluntary donations and paid subscriptions.
Apart from offering high-quality, independent journalism on a range of topics, a distinguishing characteristic of The Guardian is its penchant for reader involvement and engagement. Adapting to the changing media landscape and appropriating business models that fit the transition from print to on-line news media, the Guardian has transformed itself into a platform that enables forms of citizen journalism, blogging, and welcomes readers' comments on news articles BIBREF0. In order for a reader to comment on articles, it is required that a user account is made, which provides a user with a unique user name and a user profile page with a stable URL. According to the website's help pages, providing users with an identity that is consistently recognized by the community fosters proper on-line community behaviour BIBREF8. Registered users can post comments on content that is open to commenting, and these comments are moderated by a dedicated moderation team according to The Guardian's community standards and participation guidelines BIBREF9. In support of digital methods and innovative approaches to journalism and data mining, The Guardian has launched an open API (application programming interface) through which developers can access different types of content BIBREF10. It should be noted that at the moment of writing this article, readers' comments are not accessible through this API. For the scientific and educational purposes of this paper, comments were thus consulted using a dedicated scraper.
Taking into account this community and technologically-driven orientation, the communicative setting of The Guardian from which opinions are to be mined and the underlying belief system revealed, is defined by articles, participating commenters and comment spheres (that is, the actual comments aggregated by user, individual article or collection of articles) (see Figure FIGREF4).
In this setting, articles (and previous comments on those articles) can be commented on by participating commenters, each of whom brings to the debate his or her own opinions or belief system. What this belief system might consist of can be inferred on a number of levels, with varying degrees of precision. On the most general level, a generic description of the profile of the average reader of The Guardian can be informative. Such profiles have been compiled by market researchers with the purpose of informing advertisers about the demographic that might be reached through this news website (and other products carrying The Guardian's brand). As of the writing of this article, the audience of The Guardian is presented to advertisers as a `progressive' audience:
Living in a world of unprecedented societal change, with the public narratives around politics, gender, body image, sexuality and diet all being challenged. The Guardian is committed to reflecting the progressive agenda, and reaching the crowd that uphold those values. It’s helpful that we reach over half of progressives in the UK BIBREF11.
A second, equally high-level indicator of the beliefs that might be present on the platform is the set of links through which articles on climate change can be accessed. An article on climate change might for instance be consulted through the environment section of the news website, but also through the business section. Assuming that business interests might potentially be at odds with environmental concerns, it could be hypothesized that the particular comment sphere for that article consists of at least two potentially clashing frames of mind or belief systems.
However, as will be expanded upon further in this article, truly capturing opinion dynamics requires a more systemic and fine-grained approach. The present article therefore proposes a method for harvesting opinions from the actual comment texts. The presupposition is thereby that comment spheres are marked by a diversity of potentially related opinions and beliefs. Opinions might for instance be connected through the reply structure that marks the comment section of an article, but this connection might also manifest itself on a semantic level (that is, the level of meaning or the actual contents of the comments). To capture this multidimensional, interconnected nature of the comment spheres, it is proposed to represent comment spheres as networks, where the nodes represent opinions and beliefs, and edges the relationships between these beliefs (see the spatial representation of beliefs infra). The use of precision language tools to extract such beliefs and their mutual relationships, as will be explored in the following sections, can open up new pathways of model validation and creation.
Mining opinions and beliefs from texts
In traditional experimental settings, survey techniques and associated statistical models provide researchers with established methods to gauge and analyze the opinions of a population. When studying opinion landscapes through on-line social media, however, harvesting beliefs from big textual data such as news website comments and developing or appropriating models for their analysis is a non-trivial task BIBREF12, BIBREF13, BIBREF14.
In the present context, two challenges related to data-gathering and text mining need to be addressed: (1) defining what constitutes an expression of an opinion or belief, and (2) associating this definition with a pattern that might be extracted from texts. Recent scholarship in the fields of natural language processing (NLP) and argumentation mining has yielded a range of instruments and methods for the (automatic) identification of argumentative claims in texts BIBREF15, BIBREF16. Adding to these instruments and methods, the present article proposes an approach in which belief systems or opinions on climate change are accessed through expressions of causation.
Mining opinions and beliefs from texts ::: Causal mapping methods and the climate change debate
The climate change debate is often characterized by expressions of causation, that is, expressions linking a certain cause with a certain effect. Cultural or societal clashes on climate change might for instance concern diverging assessments of whether global warming is man-made or not BIBREF17. Based on such examples, it can be stated that expressions of causation are closely associated with opinions or beliefs, and that as such, these expressions can be considered a valuable indicator for the range and diversity of the opinions and beliefs that constitute the climate change debate. The observatory under discussion therefore focuses on the extraction and analysis of linguistic patterns called causation frames. As will be further demonstrated in this section, the benefit of this causation-based approach is that it offers a systemic approach to opinion dynamics that comprises different layers of meaning, notably the cognitive or social meaningfulness of patterns on account of their being expressions of causation, as well as further lexical and semantic information that might be used for analysis and comparison.
The study of expressions of causation as a method for accessing and assessing belief systems and opinions has been formalized and streamlined since the 1970s. Pioneered by political scientist Robert Axelrod and others, this causal mapping method (also referred to as `cognitive mapping') was introduced as a means of reconstructing and evaluating administrative and political decision-making processes, based on the principle that
the notion of causation is vital to the process of evaluating alternatives. Regardless of philosophical difficulties involved in the meaning of causation, people do evaluate complex policy alternatives in terms of the consequences a particular choice would cause, and ultimately of what the sum of these effects would be. Indeed, such causal analysis is built into our language, and it would be very difficult for us to think completely in other terms, even if we tried BIBREF18.
Axelrod's causal mapping method comprises a set of conventions to graphically represent networks of causes and effects (the nodes in a network) as well as the qualitative aspects of this relation (the network’s directed edges, notably assertions of whether the causal linkage is positive or negative). These causes and effects are to be extracted from relevant sources by means of a series of heuristics and an encoding scheme (it should be noted that for this task Axelrod had human readers in mind). The graphs resulting from these efforts provide a structural overview of the relations among causal assertions (and thus beliefs):
The basic elements of the proposed system are quite simple. The concepts a person uses are represented as points, and the causal links between these concepts are represented as arrows between these points. This gives a pictorial representation of the causal assertions of a person as a graph of points and arrows. This kind of representation of assertions as a graph will be called a cognitive map. The policy alternatives, all of the various causes and effects, the goals, and the ultimate utility of the decision maker can all be thought of as concept variables, and represented as points in the cognitive map. The real power of this approach appears when a cognitive map is pictured in graph form; it is then relatively easy to see how each of the concepts and causal relationships relate to each other, and to see the overall structure of the whole set of portrayed assertions BIBREF18.
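For readers approaching this description from an implementation angle, the graph structure sketched in the quotation above can be rendered as a signed directed graph. The snippet below is an illustrative sketch only: the choice of Python and the networkx library, as well as the example assertion (taken from this paper's earlier example of `global warming causes rises in sea levels'), are ours and not part of Axelrod's original coding scheme.

```python
# Illustrative sketch only: a cognitive map as a signed directed graph.
# The networkx-based encoding and the example assertion are our own choices,
# not part of Axelrod's original coding scheme.
import networkx as nx

cognitive_map = nx.DiGraph()

# One encoded causal assertion: cause concept --sign--> effect concept.
cognitive_map.add_edge("global warming", "rises in sea levels", sign="+")

# Inspecting the map: list all causal assertions it contains.
for cause, effect, data in cognitive_map.edges(data=True):
    print(f"{cause} --({data['sign']})--> {effect}")
```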
In order to construct these cognitive maps based on textual information, Margaret Tucker Wrightson provides a set of reading and coding rules for extracting cause concepts, linkages (relations) and effect concepts from expressions in the English language. The assertion `Our present topic is the militarism of Germany, which is maintaining a state of tension in the Baltic Area' might for instance be encoded as follows: `the militarism of Germany' (cause concept), /+/ (a positive relationship), `maintaining a state of tension in the Baltic area' (effect concept) BIBREF19. Emphasizing the role of human interpretation, it is acknowledged that no strict set of rules can capture the entire spectrum of causal assertions:
The fact that the English language is as varied as those who use it makes the coder's task complex and difficult. No set of rules will completely solve the problems he or she might encounter. These rules, however, provide the coder with guidelines which, if conscientiously followed, will result in outcomes meeting social scientific standards of comparative validity and reliability BIBREF19.
To facilitate the task of encoders, the causal mapping method has gone through various iterations since its original inception, all the while preserving its original premises. Recent software packages have for instance been devised to support the data encoding and drawing process BIBREF20. As such, causal or cognitive mapping has become an established opinion and decision mining method within political science, business and management, and other domains. It has notably proven to be a valuable method for the study of recent societal and cultural conflicts. Thomas Homer-Dixon et al. for instance rely on cognitive-affective maps created from survey data to analyze interpretations of the housing crisis in Germany, Israeli attitudes toward the Western Wall, and moderate versus skeptical positions on climate change BIBREF21. Similarly, Duncan Shaw et al. venture to answer the question of `Why did Brexit happen?' by building causal maps of nine televised debates that were broadcast during the four weeks leading up to the Brexit referendum BIBREF22.
In order to appropriate the method of causal mapping to the study of on-line opinion dynamics, it needs to be expanded from applications at the scale of human readers and relatively small corpora of archival documents and survey answers, to the realm of `big' textual data and larger quantities of information. This attuning of cognitive mapping methods to the large-scale processing of texts required for media monitoring necessarily involves a degree of automation, as will be explored in the next section.
Mining opinions and beliefs from texts ::: Automated causation tracking with the Penelope semantic frame extractor
As outlined in the previous section, causal mapping is based on the extraction of so-called cause concepts, (causal) relations, and effect concepts from texts. The complexity of each of these concepts can range from the relatively simple (as illustrated by the easily-identifiable cause and effect relation in the example of `German militarism' cited earlier), to more complex assertions such as `The development of international cooperation in all fields across the ideological frontiers will gradually remove the hostility and fear that poison international relations', which contains two effect concepts (viz. `the hostility that poisons international relations' and `the fear that poisons international relations'). As such, this statement would have to be encoded as a double relationship BIBREF19.
The coding guidelines in BIBREF19 further reflect that extracting cause and effect concepts from texts is an operation that works on both the syntactical and semantic levels of assertions. This can be illustrated by means of the guidelines for analyzing the aforementioned causal assertion on German militarism:
1. The first step is the realization of the relationship. Does a subject affect an object? 2. Having recognized that it does, the isolation of the cause and effects concepts is the second step. As the sentence structure indicates, "the militarism of Germany" is the causal concept, because it is the initiator of the action, while the direct object clause, "a state of tension in the Baltic area," constitutes that which is somehow influenced, the effect concept BIBREF19.
In the field of computational linguistics, from which the present paper borrows part of its methods, this procedure for extracting information related to causal assertions from texts can be considered an instance of an operation called semantic frame extraction BIBREF23. A semantic frame captures a coherent part of the meaning of a sentence in a structured way. As documented in the FrameNet project BIBREF24, the Causation frame is defined as follows:
A Cause causes an Effect. Alternatively, an Actor, a participant of a (implicit) Cause, may stand in for the Cause. The entity Affected by the Causation may stand in for the overall Effect situation or event BIBREF25.
In a linguistic utterance such as a statement in a news website comment, the Causation frame can be evoked by a series of lexical units, such as `cause', `bring on', etc. In the example `If such a small earthquake CAUSES problems, just imagine a big one!', the Causation frame is triggered by the verb `causes', which therefore is called the frame evoking element. The Cause slot is filled by `a small earthquake', the Effect slot by `problems' BIBREF25.
In order to automatically mine cause and effects concepts from the corpus of comments on The Guardian, the present paper uses the Penelope semantic frame extractor: a tool that exploits the fact that semantic frames can be expressed as form-meaning mappings called constructions. Notably, frames were extracted from Guardian comments by focusing on the following lexical units (verbs, prepositions and conjunctions), listed in FrameNet as frame evoking elements of the Causation frame: Cause.v, Due to.prep, Because.c, Because of.prep, Give rise to.v, Lead to.v or Result in.v.
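To give a concrete, if simplified, impression of this lexical-unit-driven extraction step, the sketch below matches the listed frame evoking elements and splits a sentence into a Cause and an Effect span. It should be stressed that this is not the Penelope extractor itself, which relies on construction grammar and is considerably more sophisticated; the function name and the string-splitting heuristic are our own simplifications for illustration.

```python
# Minimal illustrative sketch of lexical-unit-based causation detection.
# This is NOT the Penelope semantic frame extractor (which is based on
# construction grammar); the splitting heuristic below is a simplification.
import re

FRAME_EVOKING_ELEMENTS = [
    "because of", "due to", "gives rise to", "give rise to",
    "leads to", "lead to", "results in", "result in",
    "causes", "cause", "because",
]

def extract_causation(sentence: str):
    """Return (cause, trigger, effect) if a frame evoking element is found."""
    lowered = sentence.lower()
    for trigger in FRAME_EVOKING_ELEMENTS:
        match = re.search(rf"\b{re.escape(trigger)}\b", lowered)
        if not match:
            continue
        left = sentence[:match.start()].strip(" ,.")
        right = sentence[match.end():].strip(" ,.")
        if trigger in ("because of", "due to", "because"):
            # For these triggers the cause typically follows the trigger.
            return right, trigger, left
        return left, trigger, right
    return None

print(extract_causation(
    "If such a small earthquake causes problems, just imagine a big one!"
))
# -> ('If such a small earthquake', 'causes', 'problems, just imagine a big one!')
```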
The strings output by the semantic frame extractor adhere closely to the original utterances, preserving the real-world noisiness of the comments' causation frames.
The output of the semantic frame extractor as such is used as the input for the ensuing pipeline components in the climate change opinion observatory. The aim of a further analysis of these frames is to find patterns in the beliefs and opinions they express. As will be discussed in the following section, which focuses on applications and cases, maintaining semantic nuances in this further analytic process foregrounds the role of models and aggregation levels.
Analyses and applications
Based on the presupposition that relations between causation frames reveal beliefs, the output of the semantic frame extractor creates various opportunities for exploring opinion landscapes and empirically validating conceptual models for opinion dynamics.
In general, any alignment of conceptual models and real-world data is an exercise in compromising, as the idealized, abstract nature of models is likely to be at odds with the messiness of the actual data. Finding such a compromise might for instance involve a reduction of the simplicity or elegance of the model, or, on the other hand, an increased aggregation (and thus reduced granularity) of the data.
Addressing this challenge, the current section reflects on questions of data modelling, aggregation and meaning by exploring, through case examples, different spatial representations of opinion landscapes mined from TheGuardian.com's comment sphere. These spatial renditions will be understood as network visualizations in which nodes represent argumentative statements (beliefs) and edges the degree of similarity between these statements. On the most general level, then, such a representation can consist of an overview of all the causes expressed in the corpus of climate change-related Guardian comments. This type of visualization provides a bird's-eye view of the entire opinion landscape as mined from the comment texts. In turn, such a general overview might elicit more fine-grained, micro-level investigations, in which a particular cause is singled out and its more specific associated effects are mapped. These macro- and micro-level overviews come with their own potential for theory building and evaluation, as well as distinct requirements for the depth or detail of meaning that needs to be represented. To get the most general sense of an opinion landscape one might for instance be more tolerant of abstract renditions of beliefs (e.g. by reducing statements to their most frequently used terms), but for more fine-grained analysis one requires more context and nuance (e.g. adhering as closely as possible to the original comment).
Analyses and applications ::: Aggregation
As follows from the above, one of the most fundamental questions when building automated tools that observe opinion dynamics, and that potentially advise on means of debate facilitation, concerns the level of meaning aggregation. A clear argumentative or causal association between, for instance, climate change and catastrophic events such as floods or hurricanes may become detectable by automatic causal frame tracking at the scale of large collections of articles, where this association appears statistically more often. Detection becomes far more challenging, however, when the aim is to classify small sets of only a few statements in freer expression environments such as comment spheres.
In other words, the problem of meaning aggregation is closely related to issues of scale and aggregation over utterances. The more fine-grained the semantic resolution is, that is, the more specific the cause or effect is that one is interested in, the less probable it is to observe the same statement twice. Moreover, with every independent variable (such as time, different commenters or user groups, etc.), less data on which fine-grained opinion statements are to be detected is available. In the present case of parsed comments from TheGuardian.com, providing insights into the belief system of individual commenters, even if all their statements are aggregated over time, relies on a relatively small set of argumentative statements. This relative sparseness is in part due to the fact that the scope of the semantic frame extractor is confined to the frame evoking elements listed earlier, thus omitting more implicit assertions of causation (i.e. expressions of causation that can only be derived from context and from reading between the lines).
Similarly, as will be explored in the ensuing paragraphs, matters of scale and aggregation determine the types of further linguistic analyses that can be performed on the output of the frame extractor. Within the field of computational linguistics, various techniques have been developed to represent the meaning of words as vectors that capture the contexts in which these words are typically used. Such analyses might reveal patterns of statistical significance, but it is also likely that in creating novel, numerical representations of the original utterances, the semantic structure of argumentatively linked beliefs is lost.
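A minimal illustration of this trade-off, assuming a spaCy model equipped with word vectors (en_core_web_md), is given below: two statements asserting opposite causal directions receive near-identical averaged-vector representations, precisely because the argumentative structure is flattened away. The example statements are our own and merely illustrative.

```python
# Illustrative sketch only: averaged word vectors flatten causal structure.
# Assumes spaCy with a vector-equipped model (en_core_web_md).
import spacy

nlp = spacy.load("en_core_web_md")

claim = nlp("global warming causes rises in sea levels")
reversed_claim = nlp("rises in sea levels cause global warming")

# The two statements assert opposite cause-effect directions, yet their
# document vectors (averages over word vectors) are nearly identical.
print(claim.similarity(reversed_claim))  # close to 1.0
```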
In sum, developing opinion observatories and (potential) debate facilitators entails finding a trade-off, or, in fact, a middle way between macro- and micro-level analyses. On the one hand, one needs to leverage automated analysis methods at the scale of larger collections to maximum advantage. But one also needs to integrate opportunities to interactively zoom into specific aspects of interest and provide more fine-grained information at these levels down to the actual statements. This interplay between macro- and micro-level analyses is explored in the case studies below.
Analyses and applications ::: Spatial renditions of TheGuardian.com's opinion landscape
The main purpose of the observatory under discussion is to provide insight into the belief structures that characterize the opinion landscape on climate change. For reasons outlined above, this raises questions of how to represent opinions and, correspondingly, of which representation is most suited as the atomic unit of comparison between opinions. In general terms, the desired outcome of further processing of the output of the semantic frame extractor is a network representation in which similar cause or effect strings are displayed in close proximity to one another. A high-level description of the pipeline under discussion thus goes as follows. In a first step, it can be decided whether one wants to map cause statements or effect statements. Next, the selected statements are grouped per commenter (i.e. a list is made of all cause statements or effect statements per commenter). These statements are filtered in order to retain only nouns, adjectives and verbs (thereby also omitting frequently occurring verbs such as ‘to be’). The remaining words are then lemmatized, that is, reduced to their dictionary forms. This output is finally translated into a network representation, whereby nodes represent (aggregated) statements, and edges express the semantic relatedness between statements (based on a set overlap whereby the number of shared lemmata is counted).
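A schematic sketch of this processing chain is given below. It assumes spaCy for part-of-speech filtering and lemmatization and networkx for the graph construction; the observatory's actual components, models and thresholds may differ, and the two example statements are invented purely for illustration.

```python
# Schematic sketch of the statement-to-network step described above.
# Assumes spaCy (en_core_web_sm) and networkx; the observatory's actual
# components and thresholds may differ.
from itertools import combinations

import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")
KEEP_POS = {"NOUN", "ADJ", "VERB"}
OMIT_LEMMATA = {"be"}  # frequently occurring verbs such as 'to be'

def to_lemmata(statement: str) -> frozenset:
    """Filter a statement to nouns, adjectives and verbs, then lemmatize."""
    doc = nlp(statement)
    return frozenset(
        token.lemma_.lower()
        for token in doc
        if token.pos_ in KEEP_POS and token.lemma_.lower() not in OMIT_LEMMATA
    )

def build_statement_network(statements_per_commenter: dict) -> nx.Graph:
    """Nodes are statements; edge weights count shared lemmata."""
    graph = nx.Graph()
    for commenter, statements in statements_per_commenter.items():
        for statement in statements:
            graph.add_node(statement, commenter=commenter, lemmata=to_lemmata(statement))
    for (a, data_a), (b, data_b) in combinations(graph.nodes(data=True), 2):
        shared = data_a["lemmata"] & data_b["lemmata"]
        if shared:
            graph.add_edge(a, b, weight=len(shared))
    return graph

# Invented example input: cause statements grouped per commenter.
example = {
    "commenter_red": ["the fuel industry drives government policy"],
    "commenter_blue": ["emission output from industry shows a rising trend"],
}
network = build_statement_network(example)
print(network.edges(data=True))  # the two statements share the lemma 'industry'
```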
As illustrated by two spatial renditions that were created using this approach and visualized using the network analysis tool Gephi BIBREF26, the labels assigned to these nodes (lemmata, full statements, or other) can be adapted to the scope of the analysis.
Analyses and applications ::: Spatial renditions of TheGuardian.com's opinion landscape ::: A macro-level overview: causes addressed in the climate change debate
Suppose one wants to get a first idea about the scope and diversity of an opinion landscape, without any preconceived notions of this landscape's structure or composition. One way of doing this would be to map all of the causes that are mentioned in comments related to articles on climate change, that is, creating an overview of all the causes that have been retrieved by the frame extractor in a single representation. Such a representation would not immediately provide the granularity to state what the beliefs or opinions in the debates actually are, but rather, it might inspire a sense of what those opinions might be about, thus pointing towards potentially interesting phenomena that might warrant closer examination.
Figure FIGREF10, a high-level overview of the opinion landscape, reveals a number of areas to which opinions and beliefs might pertain. The top-left clusters in the diagram for instance reveal opinions about the role of people and countries, whereas on the right-hand side, we find a complementary cluster that might point to beliefs concerning the influence of high or increased CO2-emissions. In between, there is a cluster on power and energy sources, reflecting the energy debate's association with both issues of human responsibility and CO2 emissions. As such, the overview can already inspire, at best, some very general hypotheses about the types of opinions that figure in the climate change debate.
Analyses and applications ::: Spatial renditions of TheGuardian.com's opinion landscape ::: Micro-level investigations: opinions on nuclear power and global warming
Based on the range of topics on which beliefs are expressed, a micro-level analysis can be conducted to reveal what those beliefs are and, for instance, whether they align or contradict each other. This can be achieved by singling out a cause of interest, and mapping out its associated effects.
As revealed by the global overview of the climate change opinion landscape, a portion of the debate concerns power and energy sources. One topic with a particularly interesting role in this debate is nuclear power. Figure FIGREF12 illustrates how a more detailed representation of opinions on this matter can be created by spatially representing all of the effects associated with causes containing the expression `nuclear power'. Again, similar beliefs (in terms of words used in the effects) are positioned closer to each other, thus facilitating the detection of clusters. Commenters on The Guardian for instance express concerns about the deaths or extinction that might be caused by this energy resource. They also voice opinions on its cleanliness, whether or not it might decrease pollution or be its own source of pollution, and how it reduces CO2-emissions in different countries.
Whereas the detailed opinion landscape on `nuclear power' is relatively limited in terms of the number of mined opinions, other topics might reveal more elaborate belief systems. This is for instance the case for the phenomenon of `global warming'. As shown in Figure FIGREF13, opinions on global warming are clustered around the idea of `increases', notably in terms of evaporation, drought, heat waves, intensity of cyclones and storms, etc. An adjacent cluster is related to `extremes', such as extreme summers and weather events, but also extreme colds.
From opinion observation to debate facilitation
The observatory introduced in the preceding paragraphs provides preliminary insights into the range and scope of the beliefs that figure in climate change debates on TheGuardian.com. The observatory as such takes a distinctly descriptive stance, and aims to satisfy, at least in part, the information needs of researchers, activists, journalists and other stakeholders whose main concern is to document, investigate and understand on-line opinion dynamics. However, in the current information sphere, which is marked by polarization, misinformation and a close entanglement with real-world conflicts, taking a mere descriptive or neutral stance might not serve every stakeholder's needs. Indeed, given the often skewed relations between power and information, questions arise as to how media observations might in turn be translated into (political, social or economic) action. Knowledge about opinion dynamics might for instance inform interventions that remedy polarization or disarm conflict. In other words, the construction of (social) media observatories unavoidably lifts questions about the possibilities, limitations and, especially, implications of the machine-guided and human-incentivized facilitation of on-line discussions and debates.
Addressing these questions, the present paragraph introduces and explores the concept of a debate facilitator, that is, a device that extends the capabilities of the previously discussed observatory to also promote more interesting and constructive discussions. Concretely, we will conceptualize a device that reveals how the personal opinion landscapes of commenters relate to each other (in terms of overlap or lack thereof), and we will discuss what steps might potentially be taken on the basis of such representation to balance the debate. Geared towards possible interventions in the debate, such a device may thus go well beyond the observatory's objectives of making opinion processes and conflicts more transparent, which concomitantly raises a number of serious concerns that need to be acknowledged.
On rather fundamental ground, tools that steer debates in one way or another may easily become manipulative and dangerous instruments in the hands of certain interest groups. Various aspects of our daily lives are for instance already implicitly guided by recommender systems, the purpose and impact of which can be rather opaque. For this reason, research efforts across disciplines are directed at scrutinizing and rendering such systems more transparent BIBREF28. Such scrutiny is particularly pressing in the context of interventions on on-line communication platforms, which have already been argued to enforce affective communication styles that feed rather than resolve conflict. The objectives behind any facilitation device should therefore be made maximally transparent and potential biases should be fully acknowledged at every level, from data ingest to the dissemination of results BIBREF29. More concretely, the endeavour of constructing opinion observatories and facilitators foregrounds matters of `openness' of data and tools, security, ensuring data quality and representative sampling, accounting for evolving data legislation and policy, building communities and trust, and envisioning beneficial implications. By documenting the development process for a potential facilitation device, the present paper aims to contribute to these on-going investigations and debates. Furthermore, every effort has been made to protect the identities of the commenters involved. In the words of media and technology visionary Jaron Lanier, developers and computational social scientists entering this space should remain fundamentally aware of the fact that `digital information is really just people in disguise' BIBREF30.
With these reservations in mind, the proposed approach can be situated among ongoing efforts that lead from debate observation to facilitation. One such pathway, for instance, involves the construction of filters to detect hate speech, misinformation and other forms of expression that might render debates toxic BIBREF31, BIBREF32. Combined with community outreach, language-based filtering and detection tools have proven to raise awareness among social media users about the nature and potential implications of their on-line contributions BIBREF33. Similarly, advances can be expected from approaches that aim to extend the scope of analysis beyond descriptions of a present debate situation in order to model how a debate might evolve over time and how intentions of the participants could be included in such an analysis.
Progress in any of these areas hinges on a further integration of real-world data in the modelling process, as well as a further socio-technical and media-theoretical investigation of how activity on social media platforms and technologies correlate to real-world conflicts. The remainder of this section therefore ventures to explore how conceptual argument communication models for polarization and alignment BIBREF34 might be reconciled with real-world data, and how such models might inform debate facilitation efforts.
From opinion observation to debate facilitation ::: Debate facilitation through models of alignment and polarization
As discussed in previous sections, news websites like TheGuardian.com establish communicative settings in which agents (users, commenters) exchange arguments about different issues or topics. For those seeking to establish a healthy debate, it could thus be of interest to know how different users relate to each other in terms of their beliefs about a certain issue or topic (in this case climate change). Which beliefs are for instance shared by users and which ones are not? In other words, can we map patterns of alignment or polarization among users?
Figure FIGREF15 ventures to demonstrate how representations of opinion landscapes (generated using the methods outlined above) can be enriched with user information to answer such questions. Specifically, the graph represents the beliefs of two among the most active commenters in the corpus. The opinions of each user are marked using a colour coding scheme: red nodes represent the beliefs of the first user, blue nodes represent the beliefs of the second user. Nodes with a green colour represent beliefs that are shared by both users.
Taking into account again the factors of aggregation that were discussed in the previous section, Figure FIGREF15 supports some preliminary observations about the relationship between the two users in terms of their beliefs. Generally, given the fact that the graph concerns the two most active commenters on the website, it can be seen that the rendered opinion landscape is quite extensive. It is also clear that the belief systems of both users are not unrelated, as nodes of all colours can be found distributed throughout the graph. This is especially the case for the right-hand top cluster and right-hand bottom cluster of the graph, where green, red, and blue nodes are mixed. Since both users are discussing on articles on climate change, a degree of affinity between opinions or beliefs is to be expected.
Upon closer examination, a number of disparities between the belief systems of the two commenters can be detected. Considering the left-hand top cluster and center of the graph, it becomes clear that only the red commenter uses a selection of terms related to the economic and socio-political realm (e.g. `people', `american', `nation', `government') and industry (e.g. `fuel', `industry', `car', etc.). The blue commenter, on the other hand, exclusively uses a range of terms that could be deemed more technical and scientific in nature (e.g. `feedback', `property', `output', `trend', `variability', etc.). From the graph, it also follows that the blue commenter does not enter into the red commenter's `social' segments of the graph as frequently as the red commenter enters the more scientifically-oriented clusters of the graph (although in the latter cases the red commenter does not use the specific technical terminology of the blue commenter). The cluster where both beliefs mingle the most (and where overlap can be observed) is the top right cluster. This overlap is constituted by very general terms (e.g. `climate', `change', and `science'). In sum, the graph reveals that the commenters' beliefs are positioned most closely to each other on the most general aspects of the debate, whereas there is less relatedness on the social and more technical aspects of the debate. In this regard, the depicted situation seemingly evokes ongoing debates about the role or responsibilities of the people or individuals versus that of experts when it comes to climate change BIBREF35, BIBREF36, BIBREF37.
What forms of debate facilitation, then, could be based on these observations? And what kind of collective effects can be expected? As follows from the above, beliefs expressed by the two commenters shown here (which are selected based on their active participation rather than actual engagement or dialogue with one another) are to some extent complementary, as the blue commenter, who displays a scientifically-oriented system of beliefs, does not readily engage with the social topics discussed by the red commenter. As such, the overall opinion landscape of the climate change debate could potentially be enriched with novel perspectives if the blue commenter was invited to engage in a debate about such topics as industry and government. Similarly, one could explore the possibility of providing explanatory tools or additional references on occasions where the debate takes a more technical turn.
However, argument-based models of collective attitude formation BIBREF38, BIBREF34 also tell us to be cautious about such potential interventions. Following the theory underlying these models, different opinion groups prevailing during different periods of a debate will activate different argumentative associations. Facilitating exchange between users with complementary arguments supporting similar opinions may enforce biased argument pools BIBREF39 and lead to increasing polarization at the collective level. In the example considered here the two commenters agree on the general topic, but the analysis suggests that they might have different opinions about the adequate direction of specific climate change action. A more fine–grained automatic detection of cognitive and evaluative associations between arguments and opinions is needed for a reliable use of models to predict what would come out of facilitating exchange between two specific users. In this regard, computational approaches to the linguistic analysis of texts such as semantic frame extraction offer productive opportunities for empirically modelling opinion dynamics. Extraction of causation frames allows one to disentangle cause-effect relations between semantic units, which provides a productive step towards mapping and measuring structures of cognitive associations. These opportunities are to be explored by future work.
Conclusion
Ongoing transitions from a print-based media ecology to on-line news and discussion platforms have put traditional forms of news production and consumption at stake. Many challenges related to how information is currently produced and consumed come to a head in news website comment sections, which harbour the potential of providing new insights into how cultural conflicts emerge and evolve. On the basis of an observatory for analyzing climate change-related comments from TheGuardian.com, this article has critically examined possibilities and limitations of the machine-assisted exploration and possible facilitation of on-line opinion dynamics and debates.
Beyond technical and modelling pathways, this examination brings into view broader methodological and epistemological aspects of the use of digital methods to capture and study the flow of on-line information and opinions. Notably, the proposed approaches raise questions of computational analysis and interpretation that can be tied to an overarching tension between `distant' and `close reading' BIBREF40. In other words, monitoring on-line opinion dynamics means embracing the challenges and associated trade-offs that come with investigating large quantities of information through computational, text-analytical means, but doing this in such a way that nuance and meaning are not lost in the process.
Establishing productive cross-overs between the level of opinions mined at scale (for instance through the lens of causation frames) and the detailed, closer looks at specific conversations, interactions and contexts depends on a series of preliminaries. One of these is the continued availability of high-quality, accessible data. As the current on-line media ecology is recovering from recent privacy-related scandals (e.g. Cambridge Analytica), such data for obvious reasons is not always easy to come by. In the same legal and ethical vein, reproducibility and transparency of models is crucial to the further development of analytical tools and methods. As the experiments discussed in this paper have revealed, a key factor in this undertaking is the human faculty of interpretation. Just like the encoding schemes introduced by Axelrod and others before the widespread use of computational methods, present-day pipelines and tools foreground the role of human agents as the primary source of meaning attribution.
<This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 732942 (Opinion Dynamics and Cultural Conflict in European Spaces – www.Odycceus.eu).> | Unanswerable |
312417675b3dc431eb7e7b16a917b7fed98d4376 | 312417675b3dc431eb7e7b16a917b7fed98d4376_0 | Q: What are the causal mapping methods employed?
Text: Introduction ::: Background
Over the past two decades, the rise of social media and the digitization of news and discussion platforms have radically transformed how individuals and groups create, process and share news and information. As Alan Rusbridger, former editor-in-chief of the newspaper The Guardian, has it, these technologically-driven shifts in the ways people communicate, organize themselves and express their beliefs and opinions, have
empower[ed] those that were never heard, creating a new form of politics and turning traditional news corporations inside out. It is impossible to think of Donald Trump; of Brexit; of Bernie Sanders; of Podemos; of the growth of the far right in Europe; of the spasms of hope and violent despair in the Middle East and North Africa without thinking also of the total inversion of how news is created, shared and distributed. Much of it is liberating and inspiring. Some of it is ugly and dark. And something - the centuries-old craft of journalism - is in danger of being lost BIBREF0.
Rusbridger's observation that the present media-ecology puts traditional notions of politics, journalism, trust and truth at stake is a widely shared one BIBREF1, BIBREF2, BIBREF3. As such, it has sparked interdisciplinary investigations, diagnoses and ideas for remedies across the economical, socio-political, and technological spectrum, challenging our existing assumptions and epistemologies BIBREF4, BIBREF5. Among these lines of inquiry, particular strands of research from the computational social sciences are addressing pressing questions of how emerging technologies and digital methods might be operationalized to regain a grip on the dynamics that govern the flow of on-line news and its associated multitudes of voices, opinions and conflicts. Could the information circulating on on-line (social) news platforms for instance be mined to better understand and analyze the problems facing our contemporary society? Might such data mining and analysis help us to monitor the growing number of social conflicts and crises due to cultural differences and diverging world-views? And finally, would such an approach potentially facilitate early detection of conflicts and even ways to resolve them before they turn violent?
Answering these questions requires further advances in the study of cultural conflict based on digital media data. This includes the development of fine-grained representations of cultural conflict based on theoretically-informed text analysis, the integration of game-theoretical approaches to models of polarization and alignment, as well as the construction of accessible tools and media-monitoring observatories: platforms that foster insight into the complexities of social behaviour and opinion dynamics through automated computational analyses of (social) media data. Through an interdisciplinary approach, the present article aims to make both a practical and theoretical contribution to these aspects of the study of opinion dynamics and conflict in new media environments.
Introduction ::: Objective
The objective of the present article is to critically examine possibilities and limitations of machine-guided exploration and potential facilitation of on-line opinion dynamics on the basis of an experimental data analytics pipeline or observatory for mining and analyzing climate change-related user comments from the news website of The Guardian (TheGuardian.com). Combining insights from the social and political sciences with computational methods for the linguistic analysis of texts, this observatory provides a series of spatial (network) representations of the opinion landscapes on climate change on the basis of causation frames expressed in news website comments. This allows for the exploration of opinion spaces at different levels of detail and aggregation.
Technical and theoretical questions related to the proposed method and infrastructure for the exploration and facilitation of debates will be discussed in three sections. The first section concerns notions of how to define what constitutes a belief or opinion and how these can be mined from texts. To this end, an approach based on the automated extraction of semantic frames expressing causation is proposed. The observatory thus builds on the theoretical premise that expressions of causation such as `global warming causes rises in sea levels' can be revelatory for a person or group's underlying belief systems. Through a further technical description of the observatory's data-analytical components, section two of the paper deals with matters of spatially modelling the output of the semantic frame extractor and how this might be achieved without sacrificing nuances of meaning. The final section of the paper, then, discusses how insights gained from technologically observing opinion dynamics can inform conceptual modelling efforts and approaches to on-line opinion facilitation. As such, the paper brings into view and critically evaluates the fundamental conceptual leap from machine-guided observation to debate facilitation and intervention.
Through the case examples from The Guardian's website and the theoretical discussions explored in these sections, the paper intends to make a twofold contribution to the fields of media studies, opinion dynamics and computational social science. Firstly, the paper introduces and chains together a number of data analytics components for social media monitoring (and facilitation) that were developed in the context of the <project name anonymized for review> infrastructure project. The <project name anonymized for review> infrastructure makes the components discussed in this paper available as open web services in order to foster reproducibility and further experimentation and development <infrastructure reference URL anonymized for review>. Secondly, and supplementing these technological and methodological gains, the paper addresses a number of theoretical, epistemological and ethical questions that are raised by experimental approaches to opinion exploration and facilitation. This notably includes methodological questions on the preservation of meaning through text and data mining, as well as the role of human interpretation, responsibility and incentivisation in observing and potentially facilitating opinion dynamics.
Introduction ::: Data: the communicative setting of TheGuardian.com
In order to study on-line opinion dynamics and build the corresponding climate change opinion observatory discussed in this paper, a corpus of climate-change related news articles and news website comments was analyzed. Concretely, articles from the ‘climate change’ subsection from the news website of The Guardian dated from 2009 up to April 2019 were processed, along with up to 200 comments and associated metadata for articles where commenting was enabled at the time of publication. The choice for studying opinion dynamics using data from The Guardian is motivated by this news website's prominent position in the media landscape as well as its communicative setting, which is geared towards user engagement. Through this interaction with readers, the news platform embodies many of the recent shifts that characterize our present-day media ecology.
TheGuardian.com is generally acknowledged to be one of the UK's leading online newspapers, with 8.2 million unique visitors per month as of May 2013 BIBREF6. The website consists of a core news site, as well as a range of subsections that allow for further classification and navigation of articles. Articles related to climate change can for instance be accessed by navigating through the `News' section, over the subsection `environment', to the subsubsection `climate change' BIBREF7. All articles on the website can be read free of charge, as The Guardian relies on a business model that combines revenues from advertising, voluntary donations and paid subscriptions.
Apart from offering high-quality, independent journalism on a range of topics, a distinguishing characteristic of The Guardian is its penchant for reader involvement and engagement. Adapting to the changing media landscape and appropriating business models that fit the transition from print to on-line news media, The Guardian has transformed itself into a platform that enables forms of citizen journalism and blogging, and welcomes readers' comments on news articles BIBREF0. In order for a reader to comment on articles, a user account must be created, which provides the user with a unique user name and a user profile page with a stable URL. According to the website's help pages, providing users with an identity that is consistently recognized by the community fosters proper on-line community behaviour BIBREF8. Registered users can post comments on content that is open to commenting, and these comments are moderated by a dedicated moderation team according to The Guardian's community standards and participation guidelines BIBREF9. In support of digital methods and innovative approaches to journalism and data mining, The Guardian has launched an open API (application programming interface) through which developers can access different types of content BIBREF10. It should be noted that at the moment of writing this article, readers' comments are not accessible through this API. For the scientific and educational purposes of this paper, comments were thus consulted using a dedicated scraper.
Taking into account this community and technologically-driven orientation, the communicative setting of The Guardian from which opinions are to be mined and the underlying belief system revealed, is defined by articles, participating commenters and comment spheres (that is, the actual comments aggregated by user, individual article or collection of articles) (see Figure FIGREF4).
In this setting, articles (and previous comments on those articles) can be commented on by participating commenters, each of whom brings to the debate his or her own opinions or belief system. What this belief system might consist of can be inferred on a number of levels, with varying degrees of precision. On the most general level, a generic description of the profile of the average reader of The Guardian can be informative. Such profiles have been compiled by market researchers with the purpose of informing advertisers about the demographic that might be reached through this news website (and other products carrying The Guardian's brand). As of the writing of this article, the audience of The Guardian is presented to advertisers as a `progressive' audience:
Living in a world of unprecedented societal change, with the public narratives around politics, gender, body image, sexuality and diet all being challenged. The Guardian is committed to reflecting the progressive agenda, and reaching the crowd that uphold those values. It’s helpful that we reach over half of progressives in the UK BIBREF11.
A second, equally high-level indicator of the beliefs that might be present on the platform, are the links through which articles on climate change can be accessed. An article on climate change might for instance be consulted through the environment section of the news website, but also through the business section. Assuming that business interests might potentially be at odds with environmental concerns, it could be hypothesized that the particular comment sphere for that article consists of at least two potentially clashing frames of mind or belief systems.
However, as will be expanded upon further in this article, truly capturing opinion dynamics requires a more systemic and fine-grained approach. The present article therefore proposes a method for harvesting opinions from the actual comment texts. The presupposition is thereby that comment spheres are marked by a diversity of potentially related opinions and beliefs. Opinions might for instance be connected through the reply structure that marks the comment section of an article, but this connection might also manifest itself on a semantic level (that is, the level of meaning or the actual contents of the comments). To capture this multidimensional, interconnected nature of the comment spheres, it is proposed to represent comment spheres as networks, where the nodes represent opinions and beliefs, and edges the relationships between these beliefs (see the spatial representation of beliefs infra). The use of precision language tools to extract such beliefs and their mutual relationships, as will be explored in the following sections, can open up new pathways of model validation and creation.
Mining opinions and beliefs from texts
In traditional experimental settings, survey techniques and associated statistical models provide researchers with established methods to gauge and analyze the opinions of a population. When studying opinion landscapes through on-line social media, however, harvesting beliefs from big textual data such as news website comments and developing or appropriating models for their analysis is a non-trivial task BIBREF12, BIBREF13, BIBREF14.
In the present context, two challenges related to data-gathering and text mining need to be addressed: (1) defining what constitutes an expression of an opinion or belief, and (2) associating this definition with a pattern that might be extracted from texts. Recent scholarship in the fields of natural language processing (NLP) and argumentation mining has yielded a range of instruments and methods for the (automatic) identification of argumentative claims in texts BIBREF15, BIBREF16. Adding to these instruments and methods, the present article proposes an approach in which belief systems or opinions on climate change are accessed through expressions of causation.
Mining opinions and beliefs from texts ::: Causal mapping methods and the climate change debate
The climate change debate is often characterized by expressions of causation, that is, expressions linking a certain cause with a certain effect. Cultural or societal clashes on climate change might for instance concern diverging assessments of whether global warming is man-made or not BIBREF17. Based on such examples, it can be stated that expressions of causation are closely associated with opinions or beliefs, and that as such, these expressions can be considered a valuable indicator for the range and diversity of the opinions and beliefs that constitute the climate change debate. The observatory under discussion therefore focuses on the extraction and analysis of linguistic patterns called causation frames. As will be further demonstrated in this section, the benefit of this causation-based approach is that it offers a systemic approach to opinion dynamics that comprises different layers of meaning, notably the cognitive or social meaningfulness of patterns on account of their being expressions of causation, as well as further lexical and semantic information that might be used for analysis and comparison.
The study of expressions of causation as a method for accessing and assessing belief systems and opinions has been formalized and streamlined since the 1970s. Pioneered by political scientist Robert Axelrod and others, this causal mapping method (also referred to as `cognitive mapping') was introduced as a means of reconstructing and evaluating administrative and political decision-making processes, based on the principle that
the notion of causation is vital to the process of evaluating alternatives. Regardless of philosophical difficulties involved in the meaning of causation, people do evaluate complex policy alternatives in terms of the consequences a particular choice would cause, and ultimately of what the sum of these effects would be. Indeed, such causal analysis is built into our language, and it would be very difficult for us to think completely in other terms, even if we tried BIBREF18.
Axelrod's causal mapping method comprises a set of conventions to graphically represent networks of causes and effects (the nodes in a network) as well as the qualitative aspects of this relation (the network’s directed edges, notably assertions of whether the causal linkage is positive or negative). These causes and effects are to be extracted from relevant sources by means of a series of heuristics and an encoding scheme (it should be noted that for this task Axelrod had human readers in mind). The graphs resulting from these efforts provide a structural overview of the relations among causal assertions (and thus beliefs):
The basic elements of the proposed system are quite simple. The concepts a person uses are represented as points, and the causal links between these concepts are represented as arrows between these points. This gives a pictorial representation of the causal assertions of a person as a graph of points and arrows. This kind of representation of assertions as a graph will be called a cognitive map. The policy alternatives, all of the various causes and effects, the goals, and the ultimate utility of the decision maker can all be thought of as concept variables, and represented as points in the cognitive map. The real power of this approach appears when a cognitive map is pictured in graph form; it is then relatively easy to see how each of the concepts and causal relationships relate to each other, and to see the overall structure of the whole set of portrayed assertions BIBREF18.
In order to construct these cognitive maps based on textual information, Margaret Tucker Wrightson provides a set of reading and coding rules for extracting cause concepts, linkages (relations) and effect concepts from expressions in the English language. The assertion `Our present topic is the militarism of Germany, which is maintaining a state of tension in the Baltic Area' might for instance be encoded as follows: `the militarism of Germany' (cause concept), /+/ (a positive relationship), `maintaining a state of tension in the Baltic area' (effect concept) BIBREF19. Emphasizing the role of human interpretation, it is acknowledged that no strict set of rules can capture the entire spectrum of causal assertions:
The fact that the English language is as varied as those who use it makes the coder's task complex and difficult. No set of rules will completely solve the problems he or she might encounter. These rules, however, provide the coder with guidelines which, if conscientiously followed, will result in outcomes meeting social scientific standards of comparative validity and reliability BIBREF19.
To facilitate the task of encoders, the causal mapping method has gone through various iterations since its original inception, all the while preserving its original premises. Recent software packages have for instance been devised to support the data encoding and drawing process BIBREF20. As such, causal or cognitive mapping has become an established opinion and decision mining method within political science, business and management, and other domains. It has notably proven to be a valuable method for the study of recent societal and cultural conflicts. Thomas Homer-Dixon et al. for instance rely on cognitive-affective maps created from survey data to analyze interpretations of the housing crisis in Germany, Israeli attitudes toward the Western Wall, and moderate versus skeptical positions on climate change BIBREF21. Similarly, Duncan Shaw et al. venture to answer the question of `Why did Brexit happen?' by building causal maps of nine televised debates that were broadcast during the four weeks leading up to the Brexit referendum BIBREF22.
In order to appropriate the method of causal mapping to the study of on-line opinion dynamics, it needs to be expanded from applications at the scale of human readers and relatively small corpora of archival documents and survey answers, to the realm of `big' textual data and larger quantities of information. This attuning of cognitive mapping methods to the large-scale processing of texts required for media monitoring necessarily involves a degree of automation, as will be explored in the next section.
Mining opinions and beliefs from texts ::: Automated causation tracking with the Penelope semantic frame extractor
As outlined in the previous section, causal mapping is based on the extraction of so-called cause concepts, (causal) relations, and effect concepts from texts. The complexity of each of these concepts can range from the relatively simple (as illustrated by the easily-identifiable cause and effect relation in the example of `German militarism' cited earlier), to more complex assertions such as `The development of international cooperation in all fields across the ideological frontiers will gradually remove the hostility and fear that poison international relations', which contains two effect concepts (viz. `the hostility that poisons international relations' and `the fear that poisons international relations'). As such, this statement would have to be encoded as a double relationship BIBREF19.
The coding guidelines in BIBREF19 further reflect that extracting cause and effect concepts from texts is an operation that works on both the syntactical and semantic levels of assertions. This can be illustrated by means of the guidelines for analyzing the aforementioned causal assertion on German militarism:
1. The first step is the realization of the relationship. Does a subject affect an object? 2. Having recognized that it does, the isolation of the cause and effects concepts is the second step. As the sentence structure indicates, "the militarism of Germany" is the causal concept, because it is the initiator of the action, while the direct object clause, "a state of tension in the Baltic area," constitutes that which is somehow influenced, the effect concept BIBREF19.
In the field of computational linguistics, from which the present paper borrows part of its methods, this procedure for extracting information related to causal assertions from texts can be considered an instance of an operation called semantic frame extraction BIBREF23. A semantic frame captures a coherent part of the meaning of a sentence in a structured way. As documented in the FrameNet project BIBREF24, the Causation frame is defined as follows:
A Cause causes an Effect. Alternatively, an Actor, a participant of a (implicit) Cause, may stand in for the Cause. The entity Affected by the Causation may stand in for the overall Effect situation or event BIBREF25.
In a linguistic utterance such as a statement in a news website comment, the Causation frame can be evoked by a series of lexical units, such as `cause', `bring on', etc. In the example `If such a small earthquake CAUSES problems, just imagine a big one!', the Causation frame is triggered by the verb `causes', which therefore is called the frame evoking element. The Cause slot is filled by `a small earthquake', the Effect slot by `problems' BIBREF25.
In order to automatically mine cause and effects concepts from the corpus of comments on The Guardian, the present paper uses the Penelope semantic frame extractor: a tool that exploits the fact that semantic frames can be expressed as form-meaning mappings called constructions. Notably, frames were extracted from Guardian comments by focusing on the following lexical units (verbs, prepositions and conjunctions), listed in FrameNet as frame evoking elements of the Causation frame: Cause.v, Due to.prep, Because.c, Because of.prep, Give rise to.v, Lead to.v or Result in.v.
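The Penelope extractor itself is grammar-based, exploiting constructions as form-meaning mappings. As a purely illustrative approximation of what such a component does for the lexical units just listed, a simple pattern-based extractor could look as follows; the regular expressions, the function name and the cause/effect ordering heuristic are assumptions made for this sketch and are not part of the actual tool.

```python
import re

# Frame-evoking elements of the Causation frame listed above; multi-word
# triggers come first so they are preferred over their single-word parts.
TRIGGERS = [
    r"because of", r"due to", r"gives? rise to", r"leads? to",
    r"results? in", r"causes?", r"because",
]
PATTERN = re.compile(
    r"(?P<left>.+?)\b(?P<trigger>" + "|".join(TRIGGERS) + r")\b(?P<right>.+)",
    re.IGNORECASE,
)

def extract_causation(sentence):
    """Return a rough (cause, trigger, effect) triple, or None if no trigger is found."""
    match = PATTERN.search(sentence)
    if match is None:
        return None
    left, trigger, right = (match.group(g).strip(" ,.") for g in ("left", "trigger", "right"))
    # For prepositional/conjunctive triggers the cause typically follows the
    # trigger ('problems because of a small earthquake'); for verbal triggers
    # it precedes it ('a small earthquake causes problems').
    if trigger.lower().startswith(("because", "due to")):
        return right, trigger, left
    return left, trigger, right

print(extract_causation("If such a small earthquake causes problems, just imagine a big one!"))
```

Unlike a grammar-based extractor, a pattern of this kind cannot handle implicit causation or triggers at the start of a sentence, which is precisely why a more principled semantic frame extractor is used in the observatory.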
As illustrated by the following examples, the strings output by the semantic frame extractor adhere closely to the original utterances, preserving all of the real-world noisiness of the comments' causation frames:
The output of the semantic frame extractor as such is used as the input for the ensuing pipeline components in the climate change opinion observatory. The aim of a further analysis of these frames is to find patterns in the beliefs and opinions they express. As will be discussed in the following section, which focuses on applications and cases, maintaining semantic nuances in this further analytic process foregrounds the role of models and aggregation levels.
Analyses and applications
Based on the presupposition that relations between causation frames reveal beliefs, the output of the semantic frame extractor creates various opportunities for exploring opinion landscapes and empirically validating conceptual models for opinion dynamics.
In general, any alignment of conceptual models and real-world data is an exercise in compromising, as the idealized, abstract nature of models is likely to be at odds with the messiness of the actual data. Finding such a compromise might for instance involve a reduction of the simplicity or elegance of the model, or, on the other hand, an increased aggregation (and thus reduced granularity) of the data.
Addressing this challenge, the current section reflects on questions of data modelling, aggregation and meaning by exploring, through case examples, different spatial representations of opinion landscapes mined from TheGuardian.com's comment sphere. These spatial renditions will be understood as network visualizations in which nodes represent argumentative statements (beliefs) and edges the degree of similarity between these statements. On the most general level, then, such a representation can consist of an overview of all the causes expressed in the corpus of climate change-related Guardian comments. This type of visualization provides a bird's-eye view of the entire opinion landscape as mined from the comment texts. In turn, such a general overview might elicit more fine-grained, micro-level investigations, in which a particular cause is singled out and its more specific associated effects are mapped. These macro and micro level overviews come with their own potential for theory building and evaluation, as well as distinct requirements for the depth or detail of meaning that needs to be represented. To get the most general sense of an opinion landscape one might for instance be more tolerant of abstract renditions of beliefs (e.g. by reducing statements to their most frequently used terms), but for more fine-grained analysis one requires more context and nuance (e.g. adhering as closely as possible to the original comment).
Analyses and applications ::: Aggregation
As follows from the above, one of the most fundamental questions when building automated tools to observe opinion dynamics, and potentially to advise on means of debate facilitation, concerns the level of meaning aggregation. A clear argumentative or causal association between, for instance, climate change and catastrophic events such as floods or hurricanes may become detectable by automatic causal frame tracking at the scale of large collections of articles, where this association appears statistically more often. Detection, however, comes with great challenges when the aim is to classify small sets of statements in freer expression environments such as comment spheres.
In other words, the problem of meaning aggregation is closely related to issues of scale and aggregation over utterances. The more fine-grained the semantic resolution is, that is, the more specific the cause or effect is that one is interested in, the less probable it is to observe the same statement twice. Moreover, with every independent variable (such as time, different commenters or user groups, etc.), less data on which fine-grained opinion statements are to be detected is available. In the present case of parsed comments from TheGuardian.com, providing insights into the belief system of individual commenters, even if all their statements are aggregated over time, relies on a relatively small set of argumentative statements. This relative sparseness is in part due to the fact that the scope of the semantic frame extractor is confined to the frame evoking elements listed earlier, thus omitting more implicit assertions of causation (i.e. expressions of causation that can only be derived from context and from reading between the lines).
Similarly, as will be explored in the ensuing paragraphs, matters of scale and aggregation determine the types of further linguistic analyses that can be performed on the output of the frame extractor. Within the field of computational linguistics, various techniques have been developed to represent the meaning of words as vectors that capture the contexts in which these words are typically used. Such analyses might reveal patterns of statistical significance, but it is also likely that in creating novel, numerical representations of the original utterances, the semantic structure of argumentatively linked beliefs is lost.
In sum, developing opinion observatories and (potential) debate facilitators entails finding a trade-off, or, in fact, a middle way between macro- and micro-level analyses. On the one hand, one needs to leverage automated analysis methods at the scale of larger collections to maximum advantage. But one also needs to integrate opportunities to interactively zoom into specific aspects of interest and provide more fine-grained information at these levels down to the actual statements. This interplay between macro- and micro-level analyses is explored in the case studies below.
Analyses and applications ::: Spatial renditions of TheGuardian.com's opinion landscape
The main purpose of the observatory under discussion is to provide insight into the belief structures that characterize the opinion landscape on climate change. For reasons outlined above, this raises questions of how to represent opinions and, correspondingly, determining which representation is most suited as the atomic unit of comparison between opinions. In general terms, the desired outcome of further processing of the output of the semantic frame extractor is a network representation in which similar cause or effect strings are displayed in close proximity to one another. A high-level description of the pipeline under discussion thus goes as follows. In a first step, it can be decided whether one wants to map cause statements or effect statements. Next, the selected statements are grouped per commenter (i.e. a list is made of all cause statements or effect statements per commenter). These statements are filtered in order to retain only nouns, adjectives and verbs (thereby also omitting frequently occurring verbs such as ‘to be’). The remaining words are then lemmatized, that is, reduced to their dictionary forms. This output is finally translated into a network representation, whereby nodes represent (aggregated) statements, and edges express the semantic relatedness between statements (based on a set overlap whereby the number of shared lemmata are counted).
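A minimal sketch of these steps is given below, assuming spaCy for part-of-speech filtering and lemmatization and networkx for the graph; the grouping of statements per commenter is omitted, and the overlap threshold is an arbitrary assumption rather than the pipeline's actual setting.

```python
import itertools

import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")   # assumes the small English model is installed
CONTENT_POS = {"NOUN", "ADJ", "VERB"}

def to_lemmas(statement):
    """Reduce a cause/effect string to its content-word lemmata."""
    return frozenset(
        tok.lemma_.lower()
        for tok in nlp(statement)
        if tok.pos_ in CONTENT_POS and not tok.is_stop   # drops frequent verbs such as 'to be'
    )

def build_belief_graph(statements, min_shared=1):
    """Nodes are statements; edges carry the number of shared lemmata."""
    graph = nx.Graph()
    lemmas = {s: to_lemmas(s) for s in statements}
    graph.add_nodes_from(statements)
    for a, b in itertools.combinations(statements, 2):
        shared = len(lemmas[a] & lemmas[b])
        if shared >= min_shared:
            graph.add_edge(a, b, weight=shared)
    return graph

g = build_belief_graph([
    "nuclear power reduces co2 emissions",
    "nuclear power causes pollution",
    "renewables reduce emissions",
])
print(g.edges(data=True))
```

A graph built in this way can then be exported (for instance with nx.write_gexf) and laid out and styled in an external tool.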
As illustrated by two spatial renditions that were created using this approach and visualized using the network analysis tool Gephi BIBREF26, the labels assigned to these nodes (lemmata, full statements, or other) can be appropriated to the scope of the analysis.
Analyses and applications ::: Spatial renditions of TheGuardian.com's opinion landscape ::: A macro-level overview: causes addressed in the climate change debate
Suppose one wants to get a first idea about the scope and diversity of an opinion landscape, without any preconceived notions of this landscape's structure or composition. One way of doing this would be to map all of the causes that are mentioned in comments related to articles on climate change, that is, creating an overview of all the causes that have been retrieved by the frame extractor in a single representation. Such a representation would not immediately provide the granularity to state what the beliefs or opinions in the debates actually are, but rather, it might inspire a sense of what those opinions might be about, thus pointing towards potentially interesting phenomena that might warrant closer examination.
Figure FIGREF10, a high-level overview of the opinion landscape, reveals a number of areas to which opinions and beliefs might pertain. The top-left clusters in the diagram for instance reveal opinions about the role of people and countries, whereas on the right-hand side, we find a complementary cluster that might point to beliefs concerning the influence of high or increased CO2-emissions. In between, there is a cluster on power and energy sources, reflecting the energy debate's association to both issues of human responsibility and CO2 emissions. As such, the overview can already inspire, potentially at best, some very general hypotheses about the types of opinions that figure in the climate change debate.
Analyses and applications ::: Spatial renditions of TheGuardian.com's opinion landscape ::: Micro-level investigations: opinions on nuclear power and global warming
Based on the range of topics on which beliefs are expressed, a micro-level analysis can be conducted to reveal what those beliefs are and, for instance, whether they align or contradict each other. This can be achieved by singling out a cause of interest, and mapping out its associated effects.
As revealed by the global overview of the climate change opinion landscape, a portion of the debate concerns power and energy sources. One topic with a particularly interesting role in this debate is nuclear power. Figure FIGREF12 illustrates how a more detailed representation of opinions on this matter can be created by spatially representing all of the effects associated with causes containing the expression `nuclear power'. Again, similar beliefs (in terms of words used in the effects) are positioned closer to each other, thus facilitating the detection of clusters. Commenters on The Guardian for instance express concerns about the deaths or extinction that might be caused by this energy resource. They also voice opinions on its cleanliness, whether or not it might decrease pollution or be its own source of pollution, and how it reduces CO2-emissions in different countries.
Whereas the detailed opinion landscape on `nuclear power' is relatively limited in terms of the number of mined opinions, other topics might reveal more elaborate belief systems. This is for instance the case for the phenomenon of `global warming'. As shown in Figure FIGREF13, opinions on global warming are clustered around the idea of `increases', notably in terms of evaporation, drought, heat waves, intensity of cyclones and storms, etc. An adjacent cluster is related to `extremes', such as extreme summers and weather events, but also extreme colds.
From opinion observation to debate facilitation
The observatory introduced in the preceding paragraphs provides preliminary insights into the range and scope of the beliefs that figure in climate change debates on TheGuardian.com. The observatory as such takes a distinctly descriptive stance, and aims to satisfy, at least in part, the information needs of researchers, activists, journalists and other stakeholders whose main concern is to document, investigate and understand on-line opinion dynamics. However, in the current information sphere, which is marked by polarization, misinformation and a close entanglement with real-world conflicts, taking a mere descriptive or neutral stance might not serve every stakeholder's needs. Indeed, given the often skewed relations between power and information, questions arise as to how media observations might in turn be translated into (political, social or economic) action. Knowledge about opinion dynamics might for instance inform interventions that remedy polarization or disarm conflict. In other words, the construction of (social) media observatories unavoidably lifts questions about the possibilities, limitations and, especially, implications of the machine-guided and human-incentivized facilitation of on-line discussions and debates.
Addressing these questions, the present paragraph introduces and explores the concept of a debate facilitator, that is, a device that extends the capabilities of the previously discussed observatory to also promote more interesting and constructive discussions. Concretely, we will conceptualize a device that reveals how the personal opinion landscapes of commenters relate to each other (in terms of overlap or lack thereof), and we will discuss what steps might potentially be taken on the basis of such representation to balance the debate. Geared towards possible interventions in the debate, such a device may thus go well beyond the observatory's objectives of making opinion processes and conflicts more transparent, which concomitantly raises a number of serious concerns that need to be acknowledged.
On rather fundamental ground, tools that steer debates in one way or another may easily become manipulative and dangerous instruments in the hands of certain interest groups. Various aspects of our daily lives are for instance already implicitly guided by recommender systems, the purpose and impact of which can be rather opaque. For this reason, research efforts across disciplines are directed at scrutinizing and rendering such systems more transparent BIBREF28. Such scrutiny is particularly pressing in the context of interventions on on-line communication platforms, which have already been argued to enforce affective communication styles that feed rather than resolve conflict. The objectives behind any facilitation device should therefore be made maximally transparent and potential biases should be fully acknowledged at every level, from data ingest to the dissemination of results BIBREF29. More concretely, the endeavour of constructing opinion observatories and facilitators foregrounds matters of `openness' of data and tools, security, ensuring data quality and representative sampling, accounting for evolving data legislation and policy, building communities and trust, and envisioning beneficial implications. By documenting the development process for a potential facilitation device, the present paper aims to contribute to these on-going investigations and debates. Furthermore, every effort has been made to protect the identities of the commenters involved. In the words of media and technology visionary Jaron Lanier, developers and computational social scientists entering this space should remain fundamentally aware of the fact that `digital information is really just people in disguise' BIBREF30.
With these reservations in mind, the proposed approach can be situated among ongoing efforts that lead from debate observation to facilitation. One such pathway, for instance, involves the construction of filters to detect hate speech, misinformation and other forms of expression that might render debates toxic BIBREF31, BIBREF32. Combined with community outreach, language-based filtering and detection tools have proven to raise awareness among social media users about the nature and potential implications of their on-line contributions BIBREF33. Similarly, advances can be expected from approaches that aim to extend the scope of analysis beyond descriptions of a present debate situation in order to model how a debate might evolve over time and how intentions of the participants could be included in such an analysis.
Progress in any of these areas hinges on a further integration of real-world data in the modelling process, as well as a further socio-technical and media-theoretical investigation of how activity on social media platforms and technologies correlate to real-world conflicts. The remainder of this section therefore ventures to explore how conceptual argument communication models for polarization and alignment BIBREF34 might be reconciled with real-world data, and how such models might inform debate facilitation efforts.
From opinion observation to debate facilitation ::: Debate facilitation through models of alignment and polarization
As discussed in previous sections, news websites like TheGuardian.com establish a communicative setting in which agents (users, commenters) exchange arguments about different issues or topics. For those seeking to establish a healthy debate, it could thus be of interest to know how different users relate to each other in terms of their beliefs about a certain issue or topic (in this case climate change). Which beliefs are for instance shared by users and which ones are not? In other words, can we map patterns of alignment or polarization among users?
Figure FIGREF15 ventures to demonstrate how representations of opinion landscapes (generated using the methods outlined above) can be enriched with user information to answer such questions. Specifically, the graph represents the beliefs of two among the most active commenters in the corpus. The opinions of each user are marked using a colour coding scheme: red nodes represent the beliefs of the first user, blue nodes represent the beliefs of the second user. Nodes with a green colour represent beliefs that are shared by both users.
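The colour coding described above amounts to a few set operations over the two users' statements. The sketch below is illustrative only; the colour names, example statements and attribute key are arbitrary assumptions.

```python
import networkx as nx

def colour_nodes(graph, beliefs_user_a, beliefs_user_b):
    """Mark nodes as user A only (red), user B only (blue) or shared (green)."""
    shared = set(beliefs_user_a) & set(beliefs_user_b)
    for node in graph.nodes:
        if node in shared:
            graph.nodes[node]["colour"] = "green"
        elif node in beliefs_user_a:
            graph.nodes[node]["colour"] = "red"
        else:
            graph.nodes[node]["colour"] = "blue"
    return graph

g = nx.Graph()
g.add_nodes_from(["warming increases drought", "government must act", "feedback drives variability"])
colour_nodes(g,
             {"warming increases drought", "government must act"},
             {"warming increases drought", "feedback drives variability"})
print(nx.get_node_attributes(g, "colour"))
# {'warming increases drought': 'green', 'government must act': 'red', 'feedback drives variability': 'blue'}
```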
Taking into account again the factors of aggregation that were discussed in the previous section, Figure FIGREF15 supports some preliminary observations about the relationship between the two users in terms of their beliefs. Generally, given the fact that the graph concerns the two most active commenters on the website, it can be seen that the rendered opinion landscape is quite extensive. It is also clear that the belief systems of both users are not unrelated, as nodes of all colours can be found distributed throughout the graph. This is especially the case for the right-hand top cluster and right-hand bottom cluster of the graph, where green, red, and blue nodes are mixed. Since both users are discussing on articles on climate change, a degree of affinity between opinions or beliefs is to be expected.
Upon closer examination, a number of disparities between the belief systems of the two commenters can be detected. Considering the left-hand top cluster and center of the graph, it becomes clear that exclusively the red commenter is using a selection of terms related to the economical and socio-political realm (e.g. `people', `american', `nation', `government') and industry (e.g. `fuel', `industry', `car', etc.). The blue commenter, on the other hand, exclusively engages in using a range of terms that could be deemed more technical and scientific in nature (e.g. `feedback', `property', `output', `trend', `variability', etc.). From the graph, it also follows that the blue commenter does not enter into the red commenter's `social' segments of the graph as frequently as the red commenter enters the more scientifically-oriented clusters of the graph (although in the latter cases the red commenter does not use the specific technical terminology of the blue commenter). The cluster where both beliefs mingle the most (and where overlap can be observed), is the top right cluster. This overlap is constituted by very general terms (e.g. `climate', `change', and `science'). In sum, the graph reveals that the commenters' beliefs are positioned most closely to each other on the most general aspects of the debate, whereas there is less relatedness on the social and more technical aspects of the debate. In this regard, the depicted situation seemingly evokes currently on-going debates about the role or responsibilities of the people or individuals versus that of experts when it comes to climate change BIBREF35, BIBREF36, BIBREF37.
What forms of debate facilitation, then, could be based on these observations? And what kind of collective effects can be expected? As follows from the above, beliefs expressed by the two commenters shown here (which are selected based on their active participation rather than actual engagement or dialogue with one another) are to some extent complementary, as the blue commenter, who displays a scientifically-oriented system of beliefs, does not readily engage with the social topics discussed by the red commenter. As such, the overall opinion landscape of the climate change could potentially be enriched with novel perspectives if the blue commenter was invited to engage in a debate about such topics as industry and government. Similarly, one could explore the possibility of providing explanatory tools or additional references on occasions where the debate takes a more technical turn.
However, argument-based models of collective attitude formation BIBREF38, BIBREF34 also tell us to be cautious about such potential interventions. Following the theory underlying these models, different opinion groups prevailing during different periods of a debate will activate different argumentative associations. Facilitating exchange between users with complementary arguments supporting similar opinions may enforce biased argument pools BIBREF39 and lead to increasing polarization at the collective level. In the example considered here the two commenters agree on the general topic, but the analysis suggests that they might have different opinions about the adequate direction of specific climate change action. A more fine–grained automatic detection of cognitive and evaluative associations between arguments and opinions is needed for a reliable use of models to predict what would come out of facilitating exchange between two specific users. In this regard, computational approaches to the linguistic analysis of texts such as semantic frame extraction offer productive opportunities for empirically modelling opinion dynamics. Extraction of causation frames allows one to disentangle cause-effect relations between semantic units, which provides a productive step towards mapping and measuring structures of cognitive associations. These opportunities are to be explored by future work.
Conclusion
Ongoing transitions from a print-based media ecology to on-line news and discussion platforms have put traditional forms of news production and consumption at stake. Many challenges related to how information is currently produced and consumed come to a head in news website comment sections, which harbour the potential of providing new insights into how cultural conflicts emerge and evolve. On the basis of an observatory for analyzing climate change-related comments from TheGuardian.com, this article has critically examined possibilities and limitations of the machine-assisted exploration and possible facilitation of on-line opinion dynamics and debates.
Beyond technical and modelling pathways, this examination brings into view broader methodological and epistemological aspects of the use of digital methods to capture and study the flow of on-line information and opinions. Notably, the proposed approaches lift questions of computational analysis and interpretation that can be tied to an overarching tension between `distant' and `close reading' BIBREF40. In other words, monitoring on-line opinion dynamics means embracing the challenges and associated trade-offs that come with investigating large quantities of information through computational, text-analytical means, but doing this in such a way that nuance and meaning are not lost in the process.
Establishing productive cross-overs between the level of opinions mined at scale (for instance through the lens of causation frames) and the detailed, closer looks at specific conversations, interactions and contexts depends on a series of preliminaries. One of these is the continued availability of high-quality, accessible data. As the current on-line media ecology is recovering from recent privacy-related scandals (e.g. Cambridge Analytica), such data for obvious reasons is not always easy to come by. In the same legal and ethical vein, reproducibility and transparency of models is crucial to the further development of analytical tools and methods. As the experiments discussed in this paper have revealed, a key factor in this undertaking are human faculties of interpretation. Just like the encoding schemes introduced by Axelrod and others before the wide-spread use of computational methods, present-day pipelines and tools foreground the role of human agents as the primary source of meaning attribution.
<This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 732942 (Opinion Dynamics and Cultural Conflict in European Spaces – www.Odycceus.eu).> | Axelrod's causal mapping method |
792d7b579cbf7bfad8fe125b0d66c2059a174cf9 | 792d7b579cbf7bfad8fe125b0d66c2059a174cf9_0 | Q: What is the previous work's model?
Text: Introduction
Hinglish is a linguistic blend of Hindi (a very widely spoken language in India) and English (an associate language of urban areas) and is spoken by upwards of 350 million people in India. While the name is based on the Hindi language, it does not refer exclusively to Hindi; it is used across India, with English words blending with Punjabi, Gujarati, Marathi and Hindi. Sometimes, though rarely, Hinglish is also used to refer to Hindi written in English script and mixed with English words or phrases. This makes analyzing the language very interesting. Its rampant usage in social media such as Twitter, Facebook, online blogs and reviews has also led to its use in delivering hate and abuse on the same platforms. We aim to find such content in social media, focusing on tweets. Hypothetically, if we can classify such tweets, we might be able to detect and isolate them for further analysis before they reach the public. This would be a great application of AI to a social cause and is thus motivating. An example of a simple, non-offensive message written in Hinglish could be:
"Why do you waste your time with <redacted content>. Aapna ghar sambhalta nahi(<redacted content>). Chale dusro ko basane..!!"
The second part of the above sentence is written in Hindi while the first part is in English. The second part calls on the person to bring order to his/her own home before trying to settle others.
Introduction ::: Modeling challenges
From the modeling perspective there are a couple of challenges introduced by the language and the labelled dataset. Generally, Hinglish follows a largely fuzzy set of rules which evolves and is dependent upon the users' preferences. It does not have any formal definition and thus the rules of usage are ambiguous. Thus, when used by different users the text produced may differ. Overall, the challenges posed by this problem are:
Geographical variation: Depending upon the geography of origination, the content may be highly influenced by the underlying region.
Language and phonetics variation: Based on a census in 2001, India has 122 major languages and 1599 other languages. The use of Hindi and English in a code-switched setting is highly influenced by these languages.
No grammar rules: Hinglish has no fixed set of grammar rules. The rules are inspired by both Hindi and English and, when mixed with slur and slang, produce large variation.
Spelling variation: There is no agreement on the spellings of the words which are mixed with English. For example, to express love, a code-mixed spelling, especially when used on social platforms, might be pyaar, pyar or pyr.
Dataset: Based on some earlier work, the only available labelled dataset had 3189 rows of text messages, with an average length of 116 words and lengths ranging from 1 to 1295. Prior work addresses this concern by using transfer learning from an architecture learnt on about 14,500 messages, reaching an accuracy of 83.90%. We addressed this concern using data augmentation techniques applied to the text data.
Related Work ::: Transfer learning based approaches
Mathur et al. in their paper for detecting offensive tweets proposed a Ternary Trans-CNN model in which they train an architecture comprising 3 layers of 1D convolutions with filter sizes of 15, 12 and 10 and a kernel size of 3, followed by 2 dense fully connected layers of sizes 64 and 3. The first dense FC layer has ReLU activation while the last dense layer has Softmax activation. They were able to train this network on a parallel English dataset provided by Davidson et al. The authors were able to achieve an accuracy of 83.9%, a precision of 80.2% and a recall of 69.8%.
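For illustration, the following is a minimal Keras sketch of how an architecture matching this description could be assembled. It is not the original authors' code: the vocabulary size, embedding layer, input length and the pooling step before the dense layers are assumptions added here to obtain a runnable model.

```python
from tensorflow.keras import layers, models

VOCAB_SIZE, EMBED_DIM, MAX_LEN = 20000, 100, 200  # assumed; not specified in the text

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    layers.Conv1D(filters=15, kernel_size=3, activation="relu"),
    layers.Conv1D(filters=12, kernel_size=3, activation="relu"),
    layers.Conv1D(filters=10, kernel_size=3, activation="relu"),
    layers.GlobalMaxPooling1D(),            # assumed; the original may flatten instead
    layers.Dense(64, activation="relu"),    # first FC layer with ReLU
    layers.Dense(3, activation="softmax"),  # hate / offensive / neither
])
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.summary()
```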
The approach looked promising given that the dataset was merely 3189 sentences divided into three categories, and thus we replicated the experiment, but we failed to replicate the results. The results were poorer than what the original authors achieved. However, most of the model hyper-parameter choices were inspired by this work.
Related Work ::: Hybrid models
In another, localized setting of the Vietnamese language, Nguyen et al. in 2017 proposed a hybrid multi-channel CNN and LSTM model in which they build feature maps for Vietnamese using a CNN to capture short-term dependencies and an LSTM to capture long-term dependencies, and concatenate both feature sets to learn a unified set of features for the messages. These concatenated feature vectors are then sent to a few fully connected layers. They achieved an accuracy of 87.3% with this architecture.
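A minimal sketch of such a two-channel design, using the Keras functional API, is shown below. All layer sizes, the shared embedding and the single dense hidden layer are assumptions for illustration and do not reproduce the configuration reported by the original authors.

```python
from tensorflow.keras import layers, models

VOCAB_SIZE, EMBED_DIM, MAX_LEN = 20000, 100, 200  # illustrative values

inputs = layers.Input(shape=(MAX_LEN,))
embedded = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(inputs)

# CNN channel: local / short-term patterns
cnn = layers.Conv1D(64, kernel_size=3, activation="relu")(embedded)
cnn = layers.GlobalMaxPooling1D()(cnn)

# LSTM channel: long-term dependencies
lstm = layers.LSTM(64)(embedded)

# Concatenate both feature sets and classify
merged = layers.Concatenate()([cnn, lstm])
hidden = layers.Dense(64, activation="relu")(merged)
outputs = layers.Dense(3, activation="softmax")(hidden)

model = models.Model(inputs, outputs)
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
```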
Dataset and Features
We used the HEOT dataset, obtained from one of the past studies done by Mathur et al., in which they annotated a set of cleaned tweets obtained from Twitter for the conversations happening in the Indian subcontinent. A labelled dataset of corresponding English tweets was also obtained from a study conducted by Davidson et al. This dataset was important for employing transfer learning in our task, since the number of labelled examples was very small. A basic summary and examples of the data from the dataset are given below.
Dataset and Features ::: Challenges
The obtained dataset had many challenges, and thus a data preparation step was employed to clean the data and make it ready for the deep learning pipeline. The challenges and the processes that were applied are stated below:
Messy text messages: The tweets had URLs, punctuation, username mentions, hashtags, emoticons, numbers and lots of special characters. These were all removed in a preprocessing cycle.
Stop words: A stop-words corpus obtained from NLTK was used to eliminate the most unproductive words, which provide little information about individual tweets.
Transliteration: Following the above two processes, we translated the Hinglish tweets into English words using a two-phase process:
Transliteration: In phase I, we used translation APIs provided by Google translation services, exposed via an SDK, to transliterate the Hinglish messages into English messages.
Translation: After transliteration, words that were specific to Hinglish were translated to English using a Hinglish-English dictionary. By doing this we converted each Hinglish message into an assortment of isolated English words, presented in a sequence that can also be represented using word-to-vector representations.
Data augmentation: Given that the dataset was very small, with a high degree of imbalance in the labelled messages across the three classes, we employed data augmentation techniques to boost the learning of the deep network. The following techniques from the EDA paper by Wei and Zou were utilized in this setting and really helped during the training phase; these techniques were not used in the previous studies (a minimal sketch of them is given after this list). The techniques were:
Synonym Replacement (SR): Randomly choose n words from the sentence that are not stop words. Replace each of these words with one of its synonyms chosen at random.
Random Insertion (RI): Find a random synonym of a random word in the sentence that is not a stop word. Insert that synonym into a random position in the sentence. Do this n times.
Random Swap (RS): Randomly choose two words in the sentence and swap their positions. Do this n times.
Random Deletion (RD): For each word in the sentence, randomly remove it with probability p.
Word Representation: We used word embedding representations from GloVe for creating the word embedding layers and for obtaining the word sequence vector representations of the processed tweets. The pre-trained embedding dimension was one of the hyperparameters of the model. Furthermore, we introduced another bit-flag hyperparameter that determined whether to freeze these learnt embeddings.
Train-test split: The labelled dataset that was available for this task was very limited in the number of examples, and thus, as noted above, a few data augmentation techniques were applied to boost the learning of the network. Before applying augmentation, a train-test split of 78%-22% was made from the original, cleansed dataset; thus, 700 tweets/messages were held out for testing. All model evaluations were done on the test set generated by this process, and the results presented in this report are based on the performance of the model on that test set. The training set of 2489 messages was, however, sent to an offline pipeline for augmenting the data. The final distribution was thus 7934 messages for training and 700 messages for testing.
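As a concrete illustration of the augmentation step referenced above, the following is a minimal sketch of the four EDA operations. It follows the published EDA recipe in spirit rather than reproducing the exact implementation used in this work; the use of WordNet for synonyms and the default values of n and p are assumptions.

```python
import random

from nltk.corpus import stopwords, wordnet  # requires nltk.download('stopwords') and nltk.download('wordnet')

STOP = set(stopwords.words("english"))

def synonyms(word):
    """Collect WordNet synonyms for a word, excluding the word itself."""
    return {l.name().replace("_", " ") for s in wordnet.synsets(word) for l in s.lemmas()} - {word}

def synonym_replacement(words, n=2):
    out = words[:]
    candidates = [w for w in out if w.lower() not in STOP and synonyms(w)]
    for w in random.sample(candidates, min(n, len(candidates))):
        out[out.index(w)] = random.choice(sorted(synonyms(w)))
    return out

def random_insertion(words, n=2):
    out = words[:]
    for _ in range(n):
        candidates = [w for w in out if w.lower() not in STOP and synonyms(w)]
        if candidates:
            new_word = random.choice(sorted(synonyms(random.choice(candidates))))
            out.insert(random.randrange(len(out) + 1), new_word)
    return out

def random_swap(words, n=2):
    out = words[:]
    for _ in range(n):
        i, j = random.randrange(len(out)), random.randrange(len(out))
        out[i], out[j] = out[j], out[i]
    return out

def random_deletion(words, p=0.1):
    kept = [w for w in words if random.random() > p]
    return kept or [random.choice(words)]  # never return an empty sentence

print(" ".join(random_swap("why do you waste your time with this".split())))
```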
Model Architecture
We tested the performance of various model architectures by running our experiments over 100 times, initially on CPU-based compute which was later migrated to GPU-based compute to overcome the slow learning progress. Our universal metric for minimization was the validation loss, and we employed various operational techniques for optimizing the learning process. These techniques and their implementation details are discussed later; they were learning rate decay, early stopping, model checkpointing and reducing the learning rate on plateau.
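In Keras, the four training aids mentioned above can be wired in as callbacks. The sketch below shows one plausible configuration; the monitored quantity, patience values, decay schedule and checkpoint file name are assumptions rather than settings reported in this paper.

```python
from tensorflow.keras.callbacks import (EarlyStopping, LearningRateScheduler,
                                        ModelCheckpoint, ReduceLROnPlateau)

callbacks = [
    EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True),
    ModelCheckpoint("best_model.h5", monitor="val_loss", save_best_only=True),
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=2, min_lr=1e-5),
    LearningRateScheduler(lambda epoch, lr: lr * 0.95),  # simple exponential decay
]

# model.fit(X_train, y_train, validation_split=0.1, epochs=50, callbacks=callbacks)
```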
Model Architecture ::: Loss function
For the loss function we chose the categorical cross-entropy loss for finding the optimal weights/parameters of the model. Formally, this loss function is defined as below:
$\mathcal{L} = - \sum_{i=1}^{N} \sum_{c=1}^{C} y_{i,c} \, \log p_{\text{model}}(y_i = c)$
The double sum runs over the $N$ observations and the $C$ categories respectively; $y_{i,c}$ is 1 if observation $i$ belongs to category $c$ and 0 otherwise, and the model probability $p_{\text{model}}(y_i = c)$ is the probability the model assigns to observation $i$ belonging to category $c$.
Model Architecture ::: Models
The model architectures we experimented with, both with and without data augmentation, were:
Fully Connected dense networks: Model hyperparameters were inspired by the previous work done by Vo et al. and Mathur et al. This was also used as a baseline model, but we did not get appreciable performance with such an architecture, as FC networks are not able to capture local and long-term dependencies.
Convolution based architectures: Architecture and hyperparameter choices were adopted from the past study done on the subject. We were able to boost the performance compared to the FC-only network, but we noticed better performance from architectures that are suited to sequences such as text messages or other time-series data.
Sequence models: We used SimpleRNN, LSTM, GRU and Bidirectional LSTM model architectures to capture the long-term dependencies of the messages in determining the class the message or tweet belonged to.
Based on all the experiments we conducted, the model described below had the best performance with respect to the metrics of recall rate, F1 score and overall accuracy.
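As a hedged illustration of that best-performing configuration, the following Keras sketch combines the hyperparameters reported in this paper: 100d GloVe embeddings, a maximum sequence length of 200, 32 Bidirectional LSTM units with a recurrent dropout of 0.2, a learning rate of 0.01 and a 3-way softmax output. The vocabulary size, the GloVe loading placeholder, the optimizer type and the intermediate dense layer are assumptions added to keep the sketch self-contained.

```python
import numpy as np
from tensorflow.keras import initializers, layers, models, optimizers

VOCAB_SIZE, EMBED_DIM, MAX_LEN = 20000, 100, 200   # vocabulary size is assumed
# Placeholder for a matrix filled row-by-row from the 100d GloVe vectors.
embedding_matrix = np.random.normal(size=(VOCAB_SIZE, EMBED_DIM)).astype("float32")

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(
        VOCAB_SIZE, EMBED_DIM,
        embeddings_initializer=initializers.Constant(embedding_matrix),
        trainable=True,                              # the 'bit flag': fine-tune vs. freeze
    ),
    layers.Bidirectional(layers.LSTM(32, recurrent_dropout=0.2)),
    layers.Dense(64, activation="relu"),             # assumed intermediate dense layer
    layers.Dense(3, activation="softmax"),           # hate / offensive / non-offensive
])
model.compile(
    loss="categorical_crossentropy",
    optimizer=optimizers.Adam(learning_rate=0.01),   # optimizer type is an assumption
    metrics=["accuracy"],
)
model.summary()
```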
Model Architecture ::: Hyper parameters
The choice of model parameters in the above models was inspired by previous work but then tuned for the best performance on the test dataset. The following parameters were considered for tuning.
Learning rate: Based on a grid search over the lr parameter, the best performance was achieved when the learning rate was set to 0.01.
Number of Bidirectional LSTM units: Sets of 32, 64 and 128 hidden activation units were considered for tuning the model. 128 was the choice made by Vo et al. in modeling for the Vietnamese language, but in our experiments, given the small dataset and to avoid overfitting to the training data, a smaller unit size was chosen.
Embedding dimension: 50, 100 and 200 dimensional word representations from the GloVe word embeddings were considered, and the best results were obtained with the 100d representation, consistent with the choices made in the previous work.
Transfer learning on Embedding: Another bit flag determined whether to train the embedding on the training data or to freeze the embedding learnt from GloVe. It was determined that the set of pre-trained weights from GloVe worked best when fine-tuned with the Hinglish data. This provides evidence that a separate word- or sentence-level embedding, learnt specifically for Hinglish text analysis, would be very useful.
Number of dense FC layers.
Maximum length of the sequence to be considered: The maximum length of a tweet/message in the dataset was 1265 words, while the average was 116. We determined that choosing 200 resulted in the best performance.
Results
During our experimentation it was evident that this is a hard problem, especially detecting hate speech in text written in a code-mixed language. The best recall rate of 77% for hate speech was obtained by a Bidirectional LSTM with 32 units and a recurrent dropout rate of 0.2. Precision-wise, the GRU type of RNN sequence model fared better than the other kinds for hate speech detection. On the other hand, for detecting offensive and non-offensive tweets, fairly satisfactory results were obtained. For offensive tweets, a precision of 92% and a recall rate of 88% were obtained with the GRU versus the BiLSTM based models. Comparatively, a recall of 85% and a precision of 76% were obtained, again by the GRU and BiLSTM based models, as shown and marked in the results.
Conclusion and Future work
The results of the experiments are encouraging for detecting offensive vs non-offensive tweets and messages written in Hinglish on social media. The utilization of data augmentation techniques in this classification task was one of the vital contributions which led us to surpass the results obtained by the previous state-of-the-art hybrid CNN-LSTM based models. However, the results of the model for predicting hateful tweets bring forth some shortcomings of the model. Based on error analysis, the biggest shortcoming is the less-than-generalizable set of examples presented by the dataset. We also note that the embedding learnt from the Hinglish dataset may be lacking and would require extensive training to yield competent word representations of Hinglish text. Given these learnings, we identify that creating word embeddings on much larger Hinglish corpora may yield significant improvements. We also hypothesize that considering methods other than translation and transliteration may prove beneficial.
References
[1] Mathur, Puneet and Sawhney, Ramit and Ayyar, Meghna and Shah, Rajiv. Did you offend me? Classification of offensive tweets in Hinglish language. Proceedings of the 2nd Workshop on Abusive Language Online (ALW2).
[2] Mathur, Puneet and Shah, Rajiv and Sawhney, Ramit and Mahata, Debanjan. Detecting offensive tweets in Hindi-English code-switched language. Proceedings of the Sixth International Workshop on Natural Language Processing for Social Media.
[3] Vo, Quan-Hoang and Nguyen, Huy-Tien and Le, Bac and Nguyen, Minh-Le. Multi-channel LSTM-CNN model for Vietnamese sentiment analysis. 2017 9th International Conference on Knowledge and Systems Engineering (KSE).
[4] Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural Computation, 1997.
[5] Sinha, R Mahesh K and Thakur, Anil. Multi-channel LSTM-CNN model for Vietnamese sentiment analysis. 2017 9th International Conference on Knowledge and Systems Engineering (KSE).
[6] Pennington, Jeffrey and Socher, Richard and Manning, Christopher. GloVe: Global vectors for word representation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP).
[7] Zhang, Lei and Wang, Shuai and Liu, Bing. Deep learning for sentiment analysis: A survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery.
[8] Caruana, Rich and Lawrence, Steve and Giles, C Lee. Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping. Advances in Neural Information Processing Systems.
[9] Beale, Mark Hudson and Hagan, Martin T and Demuth, Howard B. Neural Network Toolbox User's Guide. The MathWorks Inc.
[10] Chollet, François and others. Keras: The Python deep learning library. Astrophysics Source Code Library.
[11] Wei, Jason and Zou, Kai. EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).
5908d7fb6c48f975c5dfc5b19bb0765581df2b25 | 5908d7fb6c48f975c5dfc5b19bb0765581df2b25_1 | Q: How big is the dataset?
Text: Introduction
Hinglish is a linguistic blend of Hindi (very widely spoken language in India) and English (an associate language of urban areas) and is spoken by upwards of 350 million people in India. While the name is based on the Hindi language, it does not refer exclusively to Hindi, but is used in India, with English words blending with Punjabi, Gujarati, Marathi and Hindi. Sometimes, though rarely, Hinglish is used to refer to Hindi written in English script and mixing with English words or phrases. This makes analyzing the language very interesting. Its rampant usage in social media like Twitter, Facebook, Online blogs and reviews has also led to its usage in delivering hate and abuses in similar platforms. We aim to find such content in the social media focusing on the tweets. Hypothetically, if we can classify such tweets, we might be able to detect them and isolate them for further analysis before it reaches public. This will a great application of AI to the social cause and thus is motivating. An example of a simple, non offensive message written in Hinglish could be:
"Why do you waste your time with <redacted content>. Aapna ghar sambhalta nahi(<redacted content>). Chale dusro ko basane..!!"
The second part of the above sentence is written in Hindi while the first part is in English. Second part calls for an action to a person to bring order to his/her home before trying to settle others.
Introduction ::: Modeling challenges
From the modeling perspective there are couple of challenges introduced by the language and the labelled dataset. Generally, Hinglish follows largely fuzzy set of rules which evolves and is dependent upon the users preference. It doesn't have any formal definitions and thus the rules of usage are ambiguous. Thus, when used by different users the text produced may differ. Overall the challenges posed by this problem are:
Geographical variation: Depending upon the geography of origination, the content may be be highly influenced by the underlying region.
Language and phonetics variation: Based on a census in 2001, India has 122 major languages and 1599 other languages. The use of Hindi and English in a code switched setting is highly influenced by these language.
No grammar rules: Hinglish has no fixed set of grammar rules. The rules are inspired from both Hindi and English and when mixed with slur and slang produce large variation.
Spelling variation: There is no agreement on the spellings of the words which are mixed with English. For example to express love, a code mixed spelling, specially when used social platforms might be pyaar, pyar or pyr.
Dataset: Based on some earlier work, only available labelled dataset had 3189 rows of text messages of average length of 116 words and with a range of 1, 1295. Prior work addresses this concern by using Transfer Learning on an architecture learnt on about 14,500 messages with an accuracy of 83.90. We addressed this concern using data augmentation techniques applied on text data.
Related Work ::: Transfer learning based approaches
Mathur et al. in their paper for detecting offensive tweets proposed a Ternary Trans-CNN model where they train a model architecture comprising of 3 layers of Convolution 1D having filter sizes of 15, 12 and 10 and kernel size of 3 followed by 2 dense fully connected layer of size 64 and 3. The first dense FC layer has ReLU activation while the last Dense layer had Softmax activation. They were able to train this network on a parallel English dataset provided by Davidson et al. The authors were able to achieve Accuracy of 83.9%, Precision of 80.2%, Recall of 69.8%.
The approach looked promising given that the dataset was merely 3189 sentences divided into three categories and thus we replicated the experiment but failed to replicate the results. The results were poor than what the original authors achieved. But, most of the model hyper-parameter choices where inspired from this work.
Related Work ::: Hybrid models
In another localized setting of Vietnamese language, Nguyen et al. in 2017 proposed a Hybrid multi-channel CNN and LSTM model where they build feature maps for Vietnamese language using CNN to capture shorterm dependencies and LSTM to capture long term dependencies and concatenate both these feature sets to learn a unified set of features on the messages. These concatenated feature vectors are then sent to a few fully connected layers. They achieved an accuracy rate of 87.3% with this architecture.
Dataset and Features
We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al. This dataset was important to employ Transfer Learning to our task since the number of labeled dataset was very small. Basic summary and examples of the data from the dataset are below:
Dataset and Features ::: Challenges
The obtained data set had many challenges and thus a data preparation task was employed to clean the data and make it ready for the deep learning pipeline. The challenges and processes that were applied are stated below:
Messy text messages: The tweets had urls, punctuations, username mentions, hastags, emoticons, numbers and lots of special characters. These were all cleaned up in a preprocessing cycle to clean the data.
Stop words: Stop words corpus obtained from NLTK was used to eliminate most unproductive words which provide little information about individual tweets.
Transliteration: Followed by above two processes, we translated Hinglish tweets into English words using a two phase process
Transliteration: In phase I, we used translation API's provided by Google translation services and exposed via a SDK, to transliteration the Hinglish messages to English messages.
Translation: After transliteration, words that were specific to Hinglish were translated to English using an Hinglish-English dictionary. By doing this we converted the Hinglish message to and assortment of isolated words being presented in the message in a sequence that can also be represented using word to vector representation.
Data augmentation: Given the data set was very small with a high degree of imbalance in the labelled messages for three different classes, we employed a data augmentation technique to boost the learning of the deep network. Following techniques from the paper by Jason et al. was utilized in this setting that really helped during the training phase.Thsi techniques wasnt used in previous studies. The techniques were:
Synonym Replacement (SR):Randomly choose n words from the sentence that are not stop words. Replace each of these words with one of its synonyms chosen at random.
Random Insertion (RI):Find a random synonym of a random word in the sentence that is not a stop word. Insert that synonym into a random position in the sentence. Do this n times.
Random Swap (RS):Randomly choose two words in the sentence and swap their positions. Do this n times.
Random Deletion (RD):For each word in the sentence, randomly remove it with probability p.
Word Representation: We used word embedding representations by Glove for creating word embedding layers and to obtain the word sequence vector representations of the processed tweets. The pre-trained embedding dimension were one of the hyperparamaters for model. Further more, we introduced another bit flag hyperparameter that determined if to freeze these learnt embedding.
Train-test split: The labelled dataset that was available for this task was very limited in number of examples and thus as noted above few data augmentation techniques were applied to boost the learning of the network. Before applying augmentation, a train-test split of 78%-22% was done from the original, cleansed data set. Thus, 700 tweets/messages were held out for testing. All model evaluation were done in on the test set that got generated by this process. The results presented in this report are based on the performance of the model on the test set. The training set of 2489 messages were however sent to an offline pipeline for augmenting the data. The resulting training dataset was thus 7934 messages. the final distribution of messages for training and test was thus below:
Model Architecture
We tested the performance of various model architectures by running our experiments over 100 times on CPU-based compute, which was later migrated to GPU-based compute to overcome the slow learning progress. Our universal metric for minimization was the validation loss, and we employed various operational techniques for optimizing the learning process. These processes and their implementation details are discussed later; they were learning rate decay, early stopping, model checkpointing and reducing the learning rate on plateau.
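These operational techniques map naturally onto Keras callbacks; the sketch below is illustrative, and the patience values, factors and file name are assumptions rather than the settings actually used.

from tensorflow.keras.callbacks import (EarlyStopping, ModelCheckpoint,
                                        ReduceLROnPlateau)

callbacks = [
    # Early stopping on the validation loss, our universal metric for minimization.
    EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True),
    # Model checkpointing: keep only the best weights seen so far.
    ModelCheckpoint("best_model.h5", monitor="val_loss", save_best_only=True),
    # Learning rate decay / reduction on plateau.
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=2, min_lr=1e-5),
]
# Passed to training as: model.fit(..., validation_split=0.1, callbacks=callbacks)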
Model Architecture ::: Loss function
For the loss function we chose the categorical cross-entropy loss for finding the optimal weights/parameters of the model. Formally, this loss function for the model is defined as below:
L = -∑_{i=1}^{N} ∑_{c=1}^{C} y_{i,c} log(p_{i,c})
The double sum is over the number of observations N and the number of categories C, respectively; y_{i,c} indicates whether observation i actually belongs to category c, while the model probability p_{i,c} is the probability that observation i belongs to category c.
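A small numeric illustration of this double sum, with purely illustrative numbers:

import numpy as np

# One-hot ground truth y and predicted probabilities p for N = 2 observations
# and C = 3 categories.
y = np.array([[1, 0, 0],
              [0, 0, 1]])
p = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])

loss = -np.sum(y * np.log(p))   # double sum over observations and categories
print(loss)                     # 0.867... = -log(0.7) - log(0.6)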
Model Architecture ::: Models
The model architectures we experimented with, both with and without data augmentation, were:
Fully connected dense networks: Model hyperparameters were inspired by the previous work of Vo et al. and Mathur et al. This was also used as a baseline model, but we did not get appreciable performance from such an architecture, since FC networks are not able to capture local and long-term dependencies.
Convolution-based architectures: Architecture and hyperparameter choices were taken from the past study by Deon on the subject. We were able to boost the performance compared to the FC-only network, but we noticed better performance from architectures that are suited to sequences such as text messages or other time-series data.
Sequence models: We used SimpleRNN, LSTM, GRU and Bidirectional LSTM model architectures to capture the long-term dependencies of the messages when determining the class a message or tweet belongs to.
Based on all the experiments we conducted, the model below had the best performance with respect to the metrics of recall rate, F1 score and overall accuracy.
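A minimal Keras sketch consistent with the reported best configuration (100-d GloVe embeddings over sequences of length 200, a Bidirectional LSTM with 32 units and recurrent dropout 0.2, a 3-way softmax, and a learning rate of 0.01) is given below. The exact layer stack, the dropout and dense layer between the LSTM and the output, and the use of the Adam optimizer are assumptions; embedding_layer refers to the GloVe layer built earlier.

from tensorflow.keras.layers import Bidirectional, Dense, Dropout, LSTM
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam

def build_bilstm_model(embedding_layer, num_classes=3, lr=0.01):
    model = Sequential([
        embedding_layer,                                   # 100-d GloVe, length 200
        Bidirectional(LSTM(32, recurrent_dropout=0.2)),    # best-performing setting
        Dropout(0.2),                                      # assumed regularization
        Dense(64, activation="relu"),                      # assumed dense FC layer
        Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=Adam(learning_rate=lr),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model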
Model Architecture ::: Hyper parameters
Choices of model parameters in the above models were inspired by previous work but were then tuned for the best performance on the test dataset. The following parameters were considered for tuning.
Learning rate: Based on a grid search over the lr parameter, the best performance was achieved when the learning rate was set to 0.01.
Number of Bidirectional LSTM units: A set of 32, 64 and 128 hidden activation units was considered for tuning the model. 128 was the choice made by Vo et al. in modelling the Vietnamese language, but in our experiments, with a small dataset and to avoid overfitting to the training data, smaller unit sizes were considered.
Embedding dimension: 50-, 100- and 200-dimensional word representations from the GloVe word embeddings were considered, and the best results were obtained with the 100d representation, consistent with choices made in the previous work.
Transfer learning on embeddings: Another bit flag, for either training the embeddings on the training data or freezing the embeddings from GloVe, was used. It was determined that the set of pre-trained weights from GloVe was best when it was fine-tuned with the Hinglish data. This provides evidence that a separate word- or sentence-level embedding, learnt specifically for Hinglish text analysis, would be very useful.
Number of dense FC layers.
Maximum length of the sequence to be considered: The maximum length of a tweet/message in the dataset was 1265 while the average was 116. We determined that choosing 200 resulted in the best performance.
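A sketch of the tokenization and padding step implied by this choice, reusing X_train and X_test from the train-test split above:

from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer

MAX_LEN = 200  # chosen maximum sequence length

tokenizer = Tokenizer()
tokenizer.fit_on_texts(X_train)  # X_train / X_test from the train-test split above
X_train_pad = pad_sequences(tokenizer.texts_to_sequences(X_train), maxlen=MAX_LEN)
X_test_pad = pad_sequences(tokenizer.texts_to_sequences(X_test), maxlen=MAX_LEN)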
Results
During our experimentation, it was evident that this is a hard problem, especially detecting hate speech in text written in a code-mixed language. The best recall rate of 77% for hate speech was obtained by a Bidirectional LSTM with 32 units and a recurrent dropout rate of 0.2. Precision-wise, the GRU type of RNN sequence model fared better than the other kinds for hate speech detection. On the other hand, for detecting offensive and non-offensive tweets, fairly satisfactory results were obtained. For offensive tweets, a precision of 92% and a recall rate of 88% were obtained with the GRU and BiLSTM based models. Comparatively, for non-offensive tweets, a recall of 85% and a precision of 76% were obtained, again by the GRU and BiLSTM based models, as shown and marked in the results.
Conclusion and Future work
The results of the experiments are encouraging for detecting offensive vs non-offensive tweets and messages written in Hinglish on social media. The utilization of data augmentation techniques in this classification task was one of the vital contributions, which led us to surpass the results obtained by the previous state-of-the-art hybrid CNN-LSTM based models. However, the results of the model for predicting hateful tweets, on the contrary, bring forth some shortcomings of the model. The biggest shortcoming of the model, based on error analysis, is the less-than-generalized set of examples presented by the dataset. We also note that the embeddings learnt from the Hinglish dataset may be lacking and would require extensive training to yield competent word representations of Hinglish text. Given these learnings, we identify that creating word embeddings on much larger Hinglish corpora may yield significant results. We also hypothesize that considering alternate methods to translation and transliteration may prove beneficial.
References
[1] Mathur, Puneet; Sawhney, Ramit; Ayyar, Meghna; Shah, Rajiv. Did you offend me? Classification of offensive tweets in Hinglish language. Proceedings of the 2nd Workshop on Abusive Language Online (ALW2).
[2] Mathur, Puneet; Shah, Rajiv; Sawhney, Ramit; Mahata, Debanjan. Detecting offensive tweets in Hindi-English code-switched language. Proceedings of the Sixth International Workshop on Natural Language Processing for Social Media.
[3] Vo, Quan-Hoang; Nguyen, Huy-Tien; Le, Bac; Nguyen, Minh-Le. Multi-channel LSTM-CNN model for Vietnamese sentiment analysis. 2017 9th International Conference on Knowledge and Systems Engineering (KSE).
[4] Hochreiter, Sepp; Schmidhuber, Jürgen. Long short-term memory. Neural Computation, 1997.
[5] Sinha, R. Mahesh K.; Thakur, Anil. Machine translation of bi-lingual Hindi-English (Hinglish) text. Proceedings of Machine Translation Summit X, 2005.
[6] Pennington, Jeffrey; Socher, Richard; Manning, Christopher. GloVe: Global vectors for word representation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP).
[7] Zhang, Lei; Wang, Shuai; Liu, Bing. Deep learning for sentiment analysis: A survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery.
[8] Caruana, Rich; Lawrence, Steve; Giles, C. Lee. Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping. Advances in Neural Information Processing Systems.
[9] Beale, Mark Hudson; Hagan, Martin T.; Demuth, Howard B. Neural Network Toolbox user's guide. The MathWorks Inc.
[10] Chollet, François and others. Keras: The Python deep learning library. Astrophysics Source Code Library.
[11] Wei, Jason; Zou, Kai. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). | Resulting dataset was 7934 messages for train and 700 messages for test.
cca3301f20db16f82b5d65a102436bebc88a2026 | cca3301f20db16f82b5d65a102436bebc88a2026_0 | Q: How is the dataset collected? | A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al, HEOT obtained from one of the past studies done by Mathur et al
cfd67b9eeb10e5ad028097d192475d21d0b6845b | cfd67b9eeb10e5ad028097d192475d21d0b6845b_0 | Q: Was each text augmentation technique experimented individually? | No
e1c681280b5667671c7f78b1579d0069cba72b0e | e1c681280b5667671c7f78b1579d0069cba72b0e_0 | Q: What models do previous work use? | Ternary Trans-CNN, Hybrid multi-channel CNN and LSTM
58d50567df71fa6c3792a0964160af390556757d | 58d50567df71fa6c3792a0964160af390556757d_0 | Q: Does the dataset contain content from various social media platforms?
Text: Introduction
Hinglish is a linguistic blend of Hindi (a very widely spoken language in India) and English (an associate language of urban areas) and is spoken by upwards of 350 million people in India. While the name is based on the Hindi language, it does not refer exclusively to Hindi, but is used in India with English words blending with Punjabi, Gujarati, Marathi and Hindi. Sometimes, though rarely, Hinglish is used to refer to Hindi written in English script and mixed with English words or phrases. This makes analyzing the language very interesting. Its rampant usage in social media like Twitter, Facebook, online blogs and reviews has also led to its use in delivering hate and abuse on those platforms. We aim to find such content in social media, focusing on tweets. Hypothetically, if we can classify such tweets, we might be able to detect and isolate them for further analysis before they reach the public. This would be a great application of AI to a social cause and is thus motivating. An example of a simple, non-offensive message written in Hinglish could be:
"Why do you waste your time with <redacted content>. Aapna ghar sambhalta nahi(<redacted content>). Chale dusro ko basane..!!"
The second part of the above sentence is written in Hindi while the first part is in English. Second part calls for an action to a person to bring order to his/her home before trying to settle others.
Introduction ::: Modeling challenges
From the modeling perspective there are a couple of challenges introduced by the language and the labelled dataset. Generally, Hinglish follows a largely fuzzy set of rules which evolve and depend upon the user's preference. It doesn't have any formal definition and thus the rules of usage are ambiguous. Thus, when used by different users the text produced may differ. Overall, the challenges posed by this problem are:
Geographical variation: Depending upon the geography of origination, the content may be highly influenced by the underlying region.
Language and phonetics variation: Based on a census in 2001, India has 122 major languages and 1599 other languages. The use of Hindi and English in a code-switched setting is highly influenced by these languages.
No grammar rules: Hinglish has no fixed set of grammar rules. The rules are inspired from both Hindi and English and when mixed with slur and slang produce large variation.
Spelling variation: There is no agreement on the spellings of the words which are mixed with English. For example, to express love, the code-mixed spelling, especially when used on social platforms, might be pyaar, pyar or pyr.
Dataset: Based on some earlier work, the only available labelled dataset had 3189 rows of text messages with an average length of 116 words and a range of 1 to 1295. Prior work addresses this concern by using transfer learning on an architecture learnt on about 14,500 messages, with an accuracy of 83.90%. We addressed this concern using data augmentation techniques applied to the text data.
Related Work ::: Transfer learning based approaches
Mathur et al. in their paper for detecting offensive tweets proposed a Ternary Trans-CNN model where they train a model architecture comprising 3 layers of 1D convolution with filter sizes of 15, 12 and 10 and a kernel size of 3, followed by 2 dense fully connected layers of size 64 and 3. The first dense FC layer has ReLU activation while the last dense layer has Softmax activation. They were able to train this network on a parallel English dataset provided by Davidson et al. The authors were able to achieve an accuracy of 83.9%, precision of 80.2% and recall of 69.8%.
The approach looked promising given that the dataset was merely 3189 sentences divided into three categories, and thus we replicated the experiment but failed to replicate the results. The results were poorer than what the original authors achieved. However, most of the model hyper-parameter choices were inspired by this work.
Related Work ::: Hybrid models
In another localized setting, for the Vietnamese language, Nguyen et al. in 2017 proposed a hybrid multi-channel CNN and LSTM model where they build feature maps for Vietnamese using a CNN to capture short-term dependencies and an LSTM to capture long-term dependencies, and concatenate both these feature sets to learn a unified set of features on the messages. These concatenated feature vectors are then sent to a few fully connected layers. They achieved an accuracy rate of 87.3% with this architecture.
Dataset and Features
We used the HEOT dataset, obtained from one of the past studies done by Mathur et al., where they annotated a set of cleaned tweets obtained from Twitter for conversations happening in the Indian subcontinent. A labelled dataset of corresponding English tweets was also obtained from a study conducted by Davidson et al. This dataset was important for employing transfer learning in our task since the number of labelled examples was very small. A basic summary and examples of the data from the dataset are below:
Dataset and Features ::: Challenges
The obtained data set had many challenges and thus a data preparation task was employed to clean the data and make it ready for the deep learning pipeline. The challenges and processes that were applied are stated below:
Messy text messages: The tweets had URLs, punctuation, username mentions, hashtags, emoticons, numbers and lots of special characters. These were all removed in a preprocessing cycle to clean the data.
Stop words: Stop words corpus obtained from NLTK was used to eliminate most unproductive words which provide little information about individual tweets.
Transliteration: Following the above two processes, we translated Hinglish tweets into English words using a two-phase process:
Transliteration: In phase I, we used translation APIs provided by Google translation services and exposed via an SDK, to transliterate the Hinglish messages into English-script messages.
Translation: After transliteration, words that were specific to Hinglish were translated to English using a Hinglish-English dictionary. By doing this we converted the Hinglish message to an assortment of isolated words, presented in the message in a sequence, that can also be represented using word-to-vector representations.
Data augmentation: Given that the dataset was very small, with a high degree of imbalance in the labelled messages across the three classes, we employed data augmentation to boost the learning of the deep network. The following techniques from the paper by Jason et al. were utilized in this setting and really helped during the training phase; these techniques weren't used in previous studies (a sketch of these operations is given at the end of this section). The techniques were:
Synonym Replacement (SR):Randomly choose n words from the sentence that are not stop words. Replace each of these words with one of its synonyms chosen at random.
Random Insertion (RI):Find a random synonym of a random word in the sentence that is not a stop word. Insert that synonym into a random position in the sentence. Do this n times.
Random Swap (RS):Randomly choose two words in the sentence and swap their positions. Do this n times.
Random Deletion (RD):For each word in the sentence, randomly remove it with probability p.
Word Representation: We used word embedding representations from Glove for creating the word embedding layers and to obtain the word sequence vector representations of the processed tweets. The pre-trained embedding dimension was one of the hyperparameters of the model. Furthermore, we introduced another bit-flag hyperparameter that determined whether to freeze these learnt embeddings.
Train-test split: The labelled dataset that was available for this task was very limited in the number of examples, and thus, as noted above, a few data augmentation techniques were applied to boost the learning of the network. Before applying augmentation, a train-test split of 78%-22% was done on the original, cleansed dataset. Thus, 700 tweets/messages were held out for testing. All model evaluations were done on the test set generated by this process. The results presented in this report are based on the performance of the model on the test set. The training set of 2489 messages was, however, sent to an offline pipeline for augmenting the data. The resulting training dataset was thus 7934 messages. The final distribution of messages for training and test was thus as below:
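As referenced above, the four EDA operations can be sketched in a few lines. This is a minimal illustration only: it assumes NLTK's WordNet and stop-word corpora as the synonym and stop-word sources, and the values of n and p used in the experiments are not restated here, so they are treated as caller-chosen parameters.

```python
import random
from nltk.corpus import wordnet, stopwords  # assumes the NLTK corpora have been downloaded

STOP = set(stopwords.words("english"))

def synonyms(word):
    # Collect WordNet synonyms that differ from the word itself.
    syns = {l.name().replace("_", " ") for s in wordnet.synsets(word) for l in s.lemmas()}
    syns.discard(word)
    return list(syns)

def synonym_replacement(words, n):
    # Replace up to n non-stop words with a randomly chosen synonym.
    out = words[:]
    candidates = [w for w in out if w not in STOP and synonyms(w)]
    for w in random.sample(candidates, min(n, len(candidates))):
        out = [random.choice(synonyms(w)) if t == w else t for t in out]
    return out

def random_insertion(words, n):
    # Insert a synonym of a random non-stop word at a random position, n times.
    out = words[:]
    for _ in range(n):
        candidates = [w for w in out if w not in STOP and synonyms(w)]
        if not candidates:
            break
        out.insert(random.randrange(len(out) + 1), random.choice(synonyms(random.choice(candidates))))
    return out

def random_swap(words, n):
    # Swap two randomly chosen positions, n times.
    out = words[:]
    for _ in range(n):
        i, j = random.randrange(len(out)), random.randrange(len(out))
        out[i], out[j] = out[j], out[i]
    return out

def random_deletion(words, p):
    # Drop each word with probability p, keeping at least one word.
    kept = [w for w in words if random.random() > p]
    return kept if kept else [random.choice(words)]
```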
Model Architecture
We tested the performance of various model architectures by running our experiment over 100 times on CPU-based compute, which was later migrated to GPU-based compute to overcome the slow learning progress. Our universal metric for minimization was the validation loss, and we employed various operational techniques for optimizing the learning process. These processes and their implementation details will be discussed later, but they were learning rate decay, early stopping, model checkpointing and reducing the learning rate on plateau.
Model Architecture ::: Loss function
For the loss function we chose categorical cross entropy loss in finding the most optimal weights/parameters of the model. Formally this loss function for the model is defined as below:
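The formula itself does not appear in the text as extracted; a standard form of categorical cross-entropy consistent with the description that follows — a double sum over observations and categories, with $p_{i,c}$ the model probability that observation $i$ belongs to category $c$ and $y_{i,c}$ the one-hot ground-truth indicator — is, as an assumed reconstruction:

$$ L = -\sum_{i=1}^{N} \sum_{c=1}^{C} y_{i,c} \, \log p_{i,c} $$

(optionally normalized by the number of observations $N$).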
The double sum runs over the observations and the categories, respectively, while the model probability is the probability that observation i belongs to category c.
Model Architecture ::: Models
The model architectures we experimented with, both with and without data augmentation, were:
Fully Connected dense networks: Model hyperparameters were inspired from the previous work done by Vo et al and Mathur et al. This was also used as a baseline model but we did not get appreciable performance on such architecture due to FC networks not being able to capture local and long term dependencies.
Convolution based architectures: Architecture and hyperparameter choices were taken from the past study done on the subject. We were able to boost the performance compared to the FC-only network, but we noticed better performance from architectures that are suitable for sequences such as text messages or other time-series data.
Sequence models: We used SimpleRNN, LSTM, GRU, Bidirectional LSTM model architecture to capture long term dependencies of the messages in determining the class the message or the tweet belonged to.
Based on all the experiments we conducted, the model below had the best performance with respect to the metrics: recall rate, F1 score and overall accuracy.
Model Architecture ::: Hyper parameters
The choice of model parameters in the above models was inspired by previous work but was then tuned for the best performance on the test dataset. The following parameters were considered for tuning (a model sketch incorporating the chosen values follows this list).
Learning rate: Based on grid search, the best performance was achieved when the learning rate was set to 0.01. This value was arrived at by a grid search on the lr parameter.
Number of Bidirectional LSTM units: A set of 32, 64 and 128 hidden activation units was considered for tuning the model. 128 was the choice made by Vo et al in modeling for the Vietnamese language, but given our experiments, the small dataset, and the need to avoid overfitting to the training data, smaller unit sizes were considered.
Embedding dimension: 50, 100 and 200 dimension word representation from Glove word embedding were considered and the best results were obtained with 100d representation, consistent with choices made in the previous work.
Transfer learning on embeddings: Another bit flag, for either training the embeddings on the training data or freezing the embeddings from Glove, was used. It was determined that the set of pre-trained weights from Glove was best when fine-tuned with the Hinglish data. This provides evidence that a separate word- or sentence-level embedding learnt for Hinglish text analysis would be very useful.
Number of dense FC layers.
Maximum length of the sequence to be considered: The max length of tweets/message in the dataset was 1265 while average was 116. We determined that choosing 200 resulted in the best performance.
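As referenced above, a minimal Keras sketch of the best-performing configuration described in this section: 100d Glove embeddings with an optional freeze flag, a bidirectional LSTM with 32 units and recurrent dropout of 0.2, a maximum sequence length of 200, three output classes and categorical cross-entropy with a learning rate of 0.01. The optimizer choice and the size of the dense head are assumptions for illustration, not values stated in the text.

```python
from tensorflow.keras import layers, models, optimizers, initializers

MAX_LEN, EMBED_DIM, NUM_CLASSES = 200, 100, 3

def build_model(embedding_matrix, freeze_embeddings=True):
    # embedding_matrix: (vocab_size x 100) Glove matrix; frozen or fine-tuned via the bit flag.
    vocab_size = embedding_matrix.shape[0]
    model = models.Sequential([
        layers.Input(shape=(MAX_LEN,)),
        layers.Embedding(vocab_size, EMBED_DIM,
                         embeddings_initializer=initializers.Constant(embedding_matrix),
                         trainable=not freeze_embeddings),
        layers.Bidirectional(layers.LSTM(32, recurrent_dropout=0.2)),
        layers.Dense(64, activation="relu"),            # size of the dense head is an assumption
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    # Optimizer choice is an assumption; the 0.01 learning rate comes from the grid search above.
    model.compile(optimizer=optimizers.Adam(learning_rate=0.01),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```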
Results
During our experimentation, it was evident that this is a hard problem, especially detecting hate speech in text in a code-mixed language. The best recall rate of 77% for hate speech was obtained by a bidirectional LSTM with 32 units and a recurrent dropout rate of 0.2. Precision-wise, the GRU type of RNN sequence model fared better than other kinds for hate speech detection. On the other hand, for detecting offensive and non-offensive tweets, fairly satisfactory results were obtained. For offensive tweets, a precision of 92% and a recall rate of 88% were obtained with GRU versus BiLSTM based models. Comparatively, a recall of 85% and a precision of 76% were obtained, again by GRU and BiLSTM based models, as shown and marked in the results.
Conclusion and Future work
The results of the experiments are encouraging for detecting offensive vs non-offensive tweets and messages written in Hinglish on social media. The utilization of a data augmentation technique in this classification task was one of the vital contributions which led us to surpass the results obtained by previous state-of-the-art hybrid CNN-LSTM based models. However, the results of the model for predicting hateful tweets, on the contrary, bring forth some shortcomings of the model. The biggest shortcoming of the model, based on error analysis, is the less-than-general set of examples presented by the dataset. We also note that the embeddings learnt from the Hinglish dataset may be lacking and would require extensive training to yield competent word representations of Hinglish text. Given these learnings, we identify that creating word embeddings on much larger Hinglish corpora may have significant results. We also hypothesize that considering alternate methods to translation and transliteration may prove beneficial.
References
[1] Mathur, Puneet and Sawhney, Ramit and Ayyar, Meghna and Shah, Rajiv, Did you offend me? classification of offensive tweets in hinglish language, Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)
[2] Mathur, Puneet and Shah, Rajiv and Sawhney, Ramit and Mahata, Debanjan Detecting offensive tweets in hindi-english code-switched language Proceedings of the Sixth International Workshop on Natural Language Processing for Social Media
[3] Vo, Quan-Hoang and Nguyen, Huy-Tien and Le, Bac and Nguyen, Minh-Le Multi-channel LSTM-CNN model for Vietnamese sentiment analysis 2017 9th international conference on knowledge and systems engineering (KSE)
[4] Hochreiter, Sepp and Schmidhuber, Jürgen Long short-term memory Neural computation 1997
[5] Sinha, R Mahesh K and Thakur, Anil Multi-channel LSTM-CNN model for Vietnamese sentiment analysis 2017 9th international conference on knowledge and systems engineering (KSE)
[6] Pennington, Jeffrey and Socher, Richard and Manning, Christopher Glove: Global vectors for word representation Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)
[7] Zhang, Lei and Wang, Shuai and Liu, Bing Deep learning for sentiment analysis: A survey Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery
[8] Caruana, Rich and Lawrence, Steve and Giles, C Lee Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping Advances in neural information processing systems
[9] Beale, Mark Hudson and Hagan, Martin T and Demuth, Howard B Neural network toolbox user’s guide The MathWorks Incs
[10] Chollet, François and others Keras: The python deep learning library Astrophysics Source Code Library
[11] Wei, Jason and Zou, Kai EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) | No |
07c79edd4c29635dbc1c2c32b8df68193b7701c6 | 07c79edd4c29635dbc1c2c32b8df68193b7701c6_0 | Q: What dataset is used?
Text: Introduction
Hinglish is a linguistic blend of Hindi (a very widely spoken language in India) and English (an associate language of urban areas) and is spoken by upwards of 350 million people in India. While the name is based on the Hindi language, it does not refer exclusively to Hindi, but is used in India, with English words blending with Punjabi, Gujarati, Marathi and Hindi. Sometimes, though rarely, Hinglish is used to refer to Hindi written in English script and mixed with English words or phrases. This makes analyzing the language very interesting. Its rampant usage in social media like Twitter, Facebook, online blogs and reviews has also led to its use for delivering hate and abuse on the same platforms. We aim to find such content on social media, focusing on tweets. Hypothetically, if we can classify such tweets, we might be able to detect and isolate them for further analysis before they reach the public. This would be a great application of AI to a social cause and is thus motivating. An example of a simple, non-offensive message written in Hinglish could be:
"Why do you waste your time with <redacted content>. Aapna ghar sambhalta nahi(<redacted content>). Chale dusro ko basane..!!"
The second part of the above sentence is written in Hindi while the first part is in English. Second part calls for an action to a person to bring order to his/her home before trying to settle others.
Introduction ::: Modeling challenges
From the modeling perspective there are a couple of challenges introduced by the language and the labelled dataset. Generally, Hinglish follows a largely fuzzy set of rules which evolve and depend upon the user's preference. It doesn't have any formal definition and thus the rules of usage are ambiguous. Thus, when used by different users the text produced may differ. Overall, the challenges posed by this problem are:
Geographical variation: Depending upon the geography of origination, the content may be highly influenced by the underlying region.
Language and phonetics variation: Based on a census in 2001, India has 122 major languages and 1599 other languages. The use of Hindi and English in a code-switched setting is highly influenced by these languages.
No grammar rules: Hinglish has no fixed set of grammar rules. The rules are inspired from both Hindi and English and when mixed with slur and slang produce large variation.
Spelling variation: There is no agreement on the spellings of the words which are mixed with English. For example, to express love, the code-mixed spelling, especially when used on social platforms, might be pyaar, pyar or pyr.
Dataset: Based on some earlier work, the only available labelled dataset had 3189 rows of text messages with an average length of 116 words and a range of 1 to 1295. Prior work addresses this concern by using transfer learning on an architecture learnt on about 14,500 messages, with an accuracy of 83.90%. We addressed this concern using data augmentation techniques applied to the text data.
Related Work ::: Transfer learning based approaches
Mathur et al. in their paper for detecting offensive tweets proposed a Ternary Trans-CNN model where they train a model architecture comprising 3 layers of 1D convolution with filter sizes of 15, 12 and 10 and a kernel size of 3, followed by 2 dense fully connected layers of size 64 and 3. The first dense FC layer has ReLU activation while the last dense layer has Softmax activation. They were able to train this network on a parallel English dataset provided by Davidson et al. The authors were able to achieve an accuracy of 83.9%, precision of 80.2% and recall of 69.8%.
The approach looked promising given that the dataset was merely 3189 sentences divided into three categories, and thus we replicated the experiment but failed to replicate the results. The results were poorer than what the original authors achieved. However, most of the model hyper-parameter choices were inspired by this work.
Related Work ::: Hybrid models
In another localized setting, for the Vietnamese language, Nguyen et al. in 2017 proposed a hybrid multi-channel CNN and LSTM model where they build feature maps for Vietnamese using a CNN to capture short-term dependencies and an LSTM to capture long-term dependencies, and concatenate both these feature sets to learn a unified set of features on the messages. These concatenated feature vectors are then sent to a few fully connected layers. They achieved an accuracy rate of 87.3% with this architecture.
Dataset and Features
We used the HEOT dataset, obtained from one of the past studies done by Mathur et al., where they annotated a set of cleaned tweets obtained from Twitter for conversations happening in the Indian subcontinent. A labelled dataset of corresponding English tweets was also obtained from a study conducted by Davidson et al. This dataset was important for employing transfer learning in our task since the number of labelled examples was very small. A basic summary and examples of the data from the dataset are below:
Dataset and Features ::: Challenges
The obtained data set had many challenges and thus a data preparation task was employed to clean the data and make it ready for the deep learning pipeline. The challenges and processes that were applied are stated below:
Messy text messages: The tweets had URLs, punctuation, username mentions, hashtags, emoticons, numbers and lots of special characters. These were all removed in a preprocessing cycle to clean the data.
Stop words: Stop words corpus obtained from NLTK was used to eliminate most unproductive words which provide little information about individual tweets.
Transliteration: Following the above two processes, we translated Hinglish tweets into English words using a two-phase process:
Transliteration: In phase I, we used translation APIs provided by Google translation services and exposed via an SDK, to transliterate the Hinglish messages into English-script messages (a sketch of this conversion pipeline is given at the end of this section).
Translation: After transliteration, words that were specific to Hinglish were translated to English using a Hinglish-English dictionary. By doing this we converted the Hinglish message to an assortment of isolated words, presented in the message in a sequence, that can also be represented using word-to-vector representations.
Data augmentation: Given that the dataset was very small, with a high degree of imbalance in the labelled messages across the three classes, we employed data augmentation to boost the learning of the deep network. The following techniques from the paper by Jason et al. were utilized in this setting and really helped during the training phase; these techniques weren't used in previous studies. The techniques were:
Synonym Replacement (SR):Randomly choose n words from the sentence that are not stop words. Replace each of these words with one of its synonyms chosen at random.
Random Insertion (RI):Find a random synonym of a random word in the sentence that is not a stop word. Insert that synonym into a random position in the sentence. Do this n times.
Random Swap (RS):Randomly choose two words in the sentence and swap their positions. Do this n times.
Random Deletion (RD):For each word in the sentence, randomly remove it with probability p.
Word Representation: We used word embedding representations from Glove for creating the word embedding layers and to obtain the word sequence vector representations of the processed tweets. The pre-trained embedding dimension was one of the hyperparameters of the model. Furthermore, we introduced another bit-flag hyperparameter that determined whether to freeze these learnt embeddings.
Train-test split: The labelled dataset that was available for this task was very limited in the number of examples, and thus, as noted above, a few data augmentation techniques were applied to boost the learning of the network. Before applying augmentation, a train-test split of 78%-22% was done on the original, cleansed dataset. Thus, 700 tweets/messages were held out for testing. All model evaluations were done on the test set generated by this process. The results presented in this report are based on the performance of the model on the test set. The training set of 2489 messages was, however, sent to an offline pipeline for augmenting the data. The resulting training dataset was thus 7934 messages. The final distribution of messages for training and test was thus as below:
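As referenced above, a condensed sketch of the cleaning and two-phase transliteration/translation steps. The regular expressions, the `transliterate_api` callable standing in for the Google translation SDK, and the `HINGLISH_EN` dictionary are placeholders for illustration, not the exact resources used.

```python
import re
from nltk.corpus import stopwords  # assumes the NLTK stopwords corpus is available

STOP = set(stopwords.words("english"))
HINGLISH_EN = {"ghar": "home", "pyaar": "love"}  # placeholder Hinglish-English dictionary

def clean(tweet):
    # Strip URLs, mentions, hashtags and numbers, drop remaining special characters, remove stop words.
    tweet = re.sub(r"http\S+|@\w+|#\w+|\d+", " ", tweet)
    tweet = re.sub(r"[^a-zA-Z\s]", " ", tweet).lower()
    return [w for w in tweet.split() if w not in STOP]

def to_english(tokens, transliterate_api):
    # Phase I: transliterate each token into Latin/English script via the external service.
    romanised = [transliterate_api(w) for w in tokens]
    # Phase II: map Hinglish-specific words to English through the dictionary lookup.
    return [HINGLISH_EN.get(w, w) for w in romanised]
```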
Model Architecture
We tested the performance of various model architectures by running our experiment over 100 times on CPU-based compute, which was later migrated to GPU-based compute to overcome the slow learning progress. Our universal metric for minimization was the validation loss, and we employed various operational techniques for optimizing the learning process. These processes and their implementation details will be discussed later, but they were learning rate decay, early stopping, model checkpointing and reducing the learning rate on plateau.
Model Architecture ::: Loss function
For the loss function we chose categorical cross entropy loss in finding the most optimal weights/parameters of the model. Formally this loss function for the model is defined as below:
The double sum runs over the observations and the categories, respectively, while the model probability is the probability that observation i belongs to category c.
Model Architecture ::: Models
The model architectures we experimented with, both with and without data augmentation, were:
Fully Connected dense networks: Model hyperparameters were inspired from the previous work done by Vo et al and Mathur et al. This was also used as a baseline model but we did not get appreciable performance on such architecture due to FC networks not being able to capture local and long term dependencies.
Convolution based architectures: Architecture and hyperparameter choices were taken from the past study done on the subject. We were able to boost the performance compared to the FC-only network, but we noticed better performance from architectures that are suitable for sequences such as text messages or other time-series data.
Sequence models: We used SimpleRNN, LSTM, GRU, Bidirectional LSTM model architecture to capture long term dependencies of the messages in determining the class the message or the tweet belonged to.
Based on all the experiments we conducted, the model below had the best performance with respect to the metrics: recall rate, F1 score and overall accuracy.
Model Architecture ::: Hyper parameters
The choice of model parameters in the above models was inspired by previous work but was then tuned for the best performance on the test dataset. The following parameters were considered for tuning.
Learning rate: Based on grid search, the best performance was achieved when the learning rate was set to 0.01. This value was arrived at by a grid search on the lr parameter.
Number of Bidirectional LSTM units: A set of 32, 64 and 128 hidden activation units was considered for tuning the model. 128 was the choice made by Vo et al in modeling for the Vietnamese language, but given our experiments, the small dataset, and the need to avoid overfitting to the training data, smaller unit sizes were considered.
Embedding dimension: 50, 100 and 200 dimension word representation from Glove word embedding were considered and the best results were obtained with 100d representation, consistent with choices made in the previous work.
Transfer learning on embeddings: Another bit flag, for either training the embeddings on the training data or freezing the embeddings from Glove, was used. It was determined that the set of pre-trained weights from Glove was best when fine-tuned with the Hinglish data. This provides evidence that a separate word- or sentence-level embedding learnt for Hinglish text analysis would be very useful (a sketch of loading the Glove weights with this freeze flag is given after this list).
Number of dense FC layers.
Maximum length of the sequence to be considered: The max length of tweets/message in the dataset was 1265 while average was 116. We determined that choosing 200 resulted in the best performance.
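As referenced above, a sketch of how the Glove vectors can be loaded into an embedding matrix that is then either frozen or fine-tuned via the bit flag. The file-format handling and the zero-vector treatment of out-of-vocabulary words are assumptions for illustration.

```python
import numpy as np

def load_glove_matrix(glove_path, word_index, embed_dim=100):
    # Build a (vocab_size x embed_dim) matrix; words missing from Glove stay as zero vectors.
    vectors = {}
    with open(glove_path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype="float32")
    matrix = np.zeros((len(word_index) + 1, embed_dim), dtype="float32")
    for word, idx in word_index.items():
        if word in vectors:
            matrix[idx] = vectors[word]
    return matrix

# The freeze flag then decides whether this matrix stays fixed (transfer) or is fine-tuned on Hinglish data.
```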
Results
During our experimentation, it was evident that this is a hard problem, especially detecting hate speech in text in a code-mixed language. The best recall rate of 77% for hate speech was obtained by a bidirectional LSTM with 32 units and a recurrent dropout rate of 0.2. Precision-wise, the GRU type of RNN sequence model fared better than other kinds for hate speech detection. On the other hand, for detecting offensive and non-offensive tweets, fairly satisfactory results were obtained. For offensive tweets, a precision of 92% and a recall rate of 88% were obtained with GRU versus BiLSTM based models. Comparatively, a recall of 85% and a precision of 76% were obtained, again by GRU and BiLSTM based models, as shown and marked in the results.
Conclusion and Future work
The results of the experiments are encouraging for detecting offensive vs non-offensive tweets and messages written in Hinglish on social media. The utilization of a data augmentation technique in this classification task was one of the vital contributions which led us to surpass the results obtained by previous state-of-the-art hybrid CNN-LSTM based models. However, the results of the model for predicting hateful tweets, on the contrary, bring forth some shortcomings of the model. The biggest shortcoming of the model, based on error analysis, is the less-than-general set of examples presented by the dataset. We also note that the embeddings learnt from the Hinglish dataset may be lacking and would require extensive training to yield competent word representations of Hinglish text. Given these learnings, we identify that creating word embeddings on much larger Hinglish corpora may have significant results. We also hypothesize that considering alternate methods to translation and transliteration may prove beneficial.
References
[1] Mathur, Puneet and Sawhney, Ramit and Ayyar, Meghna and Shah, Rajiv, Did you offend me? classification of offensive tweets in hinglish language, Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)
[2] Mathur, Puneet and Shah, Rajiv and Sawhney, Ramit and Mahata, Debanjan Detecting offensive tweets in hindi-english code-switched language Proceedings of the Sixth International Workshop on Natural Language Processing for Social Media
[3] Vo, Quan-Hoang and Nguyen, Huy-Tien and Le, Bac and Nguyen, Minh-Le Multi-channel LSTM-CNN model for Vietnamese sentiment analysis 2017 9th international conference on knowledge and systems engineering (KSE)
[4] Hochreiter, Sepp and Schmidhuber, Jürgen Long short-term memory Neural computation 1997
[5] Sinha, R Mahesh K and Thakur, Anil Multi-channel LSTM-CNN model for Vietnamese sentiment analysis 2017 9th international conference on knowledge and systems engineering (KSE)
[6] Pennington, Jeffrey and Socher, Richard and Manning, Christopher Glove: Global vectors for word representation Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)
[7] Zhang, Lei and Wang, Shuai and Liu, Bing Deep learning for sentiment analysis: A survey Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery
[8] Caruana, Rich and Lawrence, Steve and Giles, C Lee Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping Advances in neural information processing systems
[9] Beale, Mark Hudson and Hagan, Martin T and Demuth, Howard B Neural network toolbox user’s guide The MathWorks Incs
[10] Chollet, François and others Keras: The python deep learning library Astrophysics Source Code Library
[11] Wei, Jason and Zou, Kai EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) | HEOT , A labelled dataset for a corresponding english tweets |
66125cfdf11d3bf8e59728428e02021177142c3a | 66125cfdf11d3bf8e59728428e02021177142c3a_0 | Q: How they demonstrate that language-neutral component is sufficiently general in terms of modeling semantics to allow high-accuracy word-alignment?
Text: Introduction
Multilingual BERT (mBERT; BIBREF0) is gaining popularity as a contextual representation for various multilingual tasks, such as dependency parsing BIBREF1, BIBREF2, cross-lingual natural language inference (XNLI) or named-entity recognition (NER) BIBREF3, BIBREF4, BIBREF5.
BIBREF3 present an exploratory paper showing that mBERT can be used cross-lingually for zero-shot transfer in morphological and syntactic tasks, at least for typologically similar languages. They also study an interesting semantic task, sentence-retrieval, with promising initial results. Their work leaves many open questions in terms of how good the cross-lingual mBERT representation is for semantics, motivating our work.
In this paper, we directly assess the semantic cross-lingual properties of mBERT. To avoid methodological issues with zero-shot transfer (possible language overfitting, hyper-parameter tuning), we selected tasks that only involve a direct comparison of the representations: cross-lingual sentence retrieval, word alignment, and machine translation quality estimation (MT QE). Additionally, we explore how the language is represented in the embeddings by training language identification classifiers and assessing how the representation similarity corresponds to phylogenetic language families.
Our results show that the mBERT representations, even after language-agnostic fine-tuning, are not very language-neutral. However, the identity of the language can be approximated as a constant shift in the representation space. An even higher language-neutrality can still be achieved by a linear projection fitted on a small amount of parallel data.
Finally, we present attempts to strengthen the language-neutral component via fine-tuning: first, for multi-lingual syntactic and morphological analysis; second, towards language identity removal via an adversarial classifier.
Related Work
Since the publication of mBERT BIBREF0, many positive experimental results were published.
BIBREF2 reached impressive results in zero-shot dependency parsing. However, the representation used for the parser was a bilingual projection of the contextual embeddings based on word-alignment trained on parallel data.
BIBREF3 recently examined the cross-lingual properties of mBERT on zero-shot NER and part-of-speech (POS) tagging but the success of zero-shot transfer strongly depends on how typologically similar the languages are. Similarly, BIBREF4 trained good multilingual models for POS tagging, NER, and XNLI, but struggled to achieve good results in the zero-shot setup.
BIBREF3 assessed mBERT on cross-lingual sentence retrieval between three language pairs. They observed that if they subtract the average difference between the embeddings from the target language representation, the retrieval accuracy significantly increases. We systematically study this idea in the later sections.
Many experiments show BIBREF4, BIBREF5, BIBREF1 that downstream task models can extract relevant features from the multilingual representations. But these results do not directly show language-neutrality, i.e., to what extent similar phenomena are represented similarly across languages. The models can obtain the task-specific information based on the knowledge of the language, which (as we show later) can be easily identified. Our choice of evaluation tasks eliminates this risk by directly comparing the representations. Limited success in zero-shot setups and the need for explicit bilingual projection in order to work well BIBREF3, BIBREF4, BIBREF6 also show the limited language neutrality of mBERT.
Centering mBERT Representations
Following BIBREF3, we hypothesize that a sentence representation in mBERT is composed of a language-specific component, which identifies the language of the sentence, and a language-neutral component, which captures the meaning of the sentence in a language-independent way. We assume that the language-specific component is similar across all sentences in the language.
We thus try to remove the language-specific information from the representations by centering the representations of sentences in each language so that their average lies at the origin of the vector space. We do this by estimating the language centroid as the mean of the mBERT representations for a set of sentences in that language and subtracting the language centroid from the contextual embeddings.
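A minimal sketch of this centering step, assuming `embeddings[lang]` holds the mean-pooled (or [cls]) mBERT vectors for a sample of sentences in each language:

```python
import numpy as np

def center_by_language(embeddings):
    # embeddings: dict mapping language code -> (num_sentences x dim) array of sentence vectors
    centroids = {lang: vecs.mean(axis=0) for lang, vecs in embeddings.items()}
    centered = {lang: vecs - centroids[lang] for lang, vecs in embeddings.items()}
    return centered, centroids
```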
We then analyze the semantic properties of both the original and the centered representations using a range of probing tasks. For all tasks, we test all layers of the model. For tasks utilizing a single-vector sentence representation, we test both the vector corresponding to the [cls] token and mean-pooled states.
Probing Tasks
We employ five probing tasks to evaluate the language neutrality of the representations.
Probing Tasks ::: Language Identification.
With a representation that captures all phenomena in a language-neutral way, it should be difficult to determine what language the sentence is written in. Unlike other tasks, language identification does require fitting a classifier. We train a linear classifier on top of a sentence representation to try to classify the language of the sentence.
Probing Tasks ::: Language Similarity.
Experiments with POS tagging BIBREF3 suggest that similar languages tend to get similar representations on average. We quantify that observation by measuring how languages tend to cluster by the language families using V-measure over hierarchical clustering of the language centroids BIBREF7.
Probing Tasks ::: Parallel Sentence Retrieval.
For each sentence in a multi-parallel corpus, we compute the cosine distance of its representation with representations of all sentences on the parallel side of the corpus and select the sentence with the smallest distance.
Besides the plain and centered [cls] and mean-pooled representations, we evaluate explicit projection into the “English space”. For each language, we fit a linear regression projecting the representations into English representation space using a small set of parallel sentences.
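A sketch of these two steps: nearest-neighbour retrieval under cosine distance, and a per-language least-squares projection into the English space fitted on a small parallel set. scikit-learn is used here only for brevity; it is not necessarily the toolkit used in the experiments.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def cosine_retrieve(src_vecs, tgt_vecs):
    # Normalise rows, then pick the parallel-side sentence with the highest cosine similarity.
    s = src_vecs / np.linalg.norm(src_vecs, axis=1, keepdims=True)
    t = tgt_vecs / np.linalg.norm(tgt_vecs, axis=1, keepdims=True)
    return np.argmax(s @ t.T, axis=1)

def fit_projection(src_dev, en_dev):
    # Linear map from a source-language representation space into the English space.
    return LinearRegression().fit(src_dev, en_dev)

# e.g. retrieval after projection:
# preds = cosine_retrieve(fit_projection(src_dev, en_dev).predict(src_test), en_test)
```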
Probing Tasks ::: Word Alignment.
While sentence retrieval could be done with keyword spotting, computing bilingual alignment requires resolving detailed correspondence on the word level.
We find the word alignment as a minimum weighted edge cover of a bipartite graph. The graph connects the tokens of the sentences in the two languages and edges between them are weighted with the cosine distance of the token representation. Tokens that get split into multiple subwords are represented using the average of the embeddings of the subwords. Note that this algorithm is invariant to representation centering which would only change the edge weights by a constant offset.
We evaluate the alignment using the F$_1$ score over both sure and possible alignment links in a manually aligned gold standard.
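A simplified sketch of the alignment step. Instead of solving the minimum-weight edge cover exactly, it links every token to its nearest counterpart in both directions and takes the union; this covers every token but is not guaranteed to be the minimum-weight cover, and it assumes subword embeddings have already been averaged into token vectors.

```python
import numpy as np

def align(src_tok_vecs, tgt_tok_vecs):
    # Cosine distance matrix between token vectors of the two sentences.
    s = src_tok_vecs / np.linalg.norm(src_tok_vecs, axis=1, keepdims=True)
    t = tgt_tok_vecs / np.linalg.norm(tgt_tok_vecs, axis=1, keepdims=True)
    dist = 1.0 - s @ t.T
    links = {(i, int(dist[i].argmin())) for i in range(dist.shape[0])}      # each source token -> nearest target
    links |= {(int(dist[:, j].argmin()), j) for j in range(dist.shape[1])}  # each target token -> nearest source
    return sorted(links)
```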
Probing Tasks ::: MT Quality Estimation.
MT QE assesses the quality of an MT system output without having access to a reference translation.
The standard evaluation metric is the correlation with the Human-targeted Translation Error Rate which is the number of edit operations a human translator would need to do to correct the system output. This is a more challenging task than the two previous ones because it requires capturing more fine-grained differences in meaning.
We evaluate how cosine distance of the representation of the source sentence and of the MT output reflects the translation quality. In addition to plain and centered representations, we also test trained bilingual projection, and a fully supervised regression trained on training data.
Experimental Setup
We use a pre-trained mBERT model that was made public with the BERT release. The model dimension is 768, hidden layer dimension 3072, self-attention uses 12 heads, the model has 12 layers. It uses a vocabulary of 120k wordpieces that is shared for all languages.
To train the language identification classifier, for each of the BERT languages we randomly selected 110k sentences of at least 20 characters from Wikipedia, and keep 5k for validation and 5k for testing for each language. The training data are also used for estimating the language centroids.
For parallel sentence retrieval, we use a multi-parallel corpus of test data from the WMT14 evaluation campaign BIBREF8 with 3,000 sentences in Czech, English, French, German, Hindi, and Russian. The linear projection experiment uses the WMT14 development data.
We use manually annotated word alignment datasets to evaluate word alignment between English on one side and Czech (2.5k sent.; BIBREF9), Swedish (192 sent.; BIBREF10), German (508 sent.), French (447 sent.; BIBREF11) and Romanian (248 sent.; BIBREF12) on the other side. We compare the results with FastAlign BIBREF13 that was provided with 1M additional parallel sentences from ParaCrawl BIBREF14 in addition to the test data.
For MT QE, we use the English-German data provided for the WMT19 QE Shared Task BIBREF15, consisting of training and test data with source sentences, their automatic translations, and manual corrections.
Results ::: Language Identification.
Table TABREF7 shows that centering the sentence representations considerably decreases the accuracy of language identification, especially in the case of mean-pooled embeddings. This indicates that the proposed centering procedure does indeed remove the language-specific information to a great extent.
Results ::: Language Similarity.
Figure FIGREF9 is a tSNE plot BIBREF16 of the language centroids, showing that the similarity of the centroids tends to correspond to the similarity of the languages. Table TABREF10 confirms that the hierarchical clustering of the language centroids mostly corresponds to the language families.
Results ::: Parallel Sentence Retrieval.
Results in Table TABREF12 reveal that the representation centering dramatically improves the retrieval accuracy, showing that it makes the representations more language-neutral. However, an explicitly learned projection of the representations leads to a much greater improvement, reaching a close-to-perfect accuracy, even though the projection was fitted on relatively small parallel data. The accuracy is higher for mean-pooled states than for the [cls] embedding and varies according to the layer of mBERT used (see Figure FIGREF13).
Results ::: Word Alignment.
Table TABREF15 shows that word-alignment based on mBERT representations surpasses the outputs of the standard FastAlign tool even if it was provided large parallel corpus. This suggests that word-level semantics are well captured by mBERT contextual embeddings. For this task, learning an explicit projection had a negligible effect on the performance.
Results ::: MT Quality Estimation.
Qualitative results of MT QE are tabulated in Table TABREF18. Unlike sentence retrieval, QE is more sensitive to subtle differences between sentences. Measuring the distance of the non-centered sentence vectors does not correlate with translation quality at all. Centering or explicit projection only leads to a mild correlation, much lower than a supervisedly trained regression; and even better performance is possible BIBREF15. The results show that the linear projection between the representations only captures a rough semantic correspondence, which does not seem to be sufficient for QE, where the most indicative feature appears to be sentence complexity.
Fine-tuning mBERT
We also considered model fine-tuning towards stronger language neutrality. We evaluate two fine-tuned versions of mBERT: UDify, tuned for a multi-lingual dependency parser, and lng-free, tuned to jettison the language-specific information from the representations.
Fine-tuning mBERT ::: UDify
The UDify model BIBREF1 uses mBERT to train a single model for dependency parsing and morphological analysis of 75 languages. During the parser training, mBERT is fine-tuned, which improves the parser accuracy. Results on zero-shot parsing suggest that the fine-tuning leads to more cross-lingual representations with respect to morphology and syntax.
However, our analyses show that fine-tuning mBERT for multilingual dependency parsing does not remove the language identity information from the representations and actually makes the representations less semantically cross-lingual.
Fine-tuning mBERT ::: lng-free
In this experiment, we try to make the representations more language-neutral by removing the language identity from the model using an adversarial approach. We continue training mBERT in a multi-task learning setup with the masked LM objective with the same sampling procedure BIBREF0 jointly with adversarial language ID classifiers BIBREF17. For each layer, we train one classifier for the [cls] token and one for the mean-pooled hidden states with the gradient reversal layer BIBREF18 between mBERT and the classifier.
The results reveal that the adversarial removal of language information succeeds in dramatically decreasing the accuracy of the language identification classifier; the effect is strongest in deeper layers for which the standard mBERT tends to perform better (see Figure FIGREF22). However, other tasks are not affected by the adversarial fine-tuning.
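A PyTorch-style sketch of the gradient reversal layer placed between mBERT and the adversarial language-ID classifiers; the scaling constant `lambd` is a training hyperparameter whose value is not specified here.

```python
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)  # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # Pass the gradient through with flipped sign, scaled by lambd; no gradient for lambd itself.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# logits = language_classifier(grad_reverse(pooled_states))  # adversarial language-ID head
```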
Conclusions
Using a set of semantically oriented tasks that require explicit semantic cross-lingual representations, we showed that mBERT contextual embeddings do not represent similar semantic phenomena similarly and therefore they are not directly usable for zero-shot cross-lingual tasks.
Contextual embeddings of mBERT capture similarities between languages and cluster the languages by their families. Neither cross-lingual fine-tuning nor adversarial language identity removal breaks this property. A part of language information is encoded by the position in the embedding space, thus a certain degree of cross-linguality can be achieved by centering the representations for each language. Exploiting this property allows a good cross-lingual sentence retrieval performance and bilingual word alignment (which is invariant to the shift). A good cross-lingual representation can be achieved by fitting a supervised projection on a small parallel corpus. | Table TABREF15 shows that word-alignment based on mBERT representations surpasses the outputs of the standard FastAlign tool even if it was provided large parallel corpus. This suggests that word-level semantics are well captured by mBERT contextual embeddings. For this task, learning an explicit projection had a negligible effect on the performance. |
66125cfdf11d3bf8e59728428e02021177142c3a | 66125cfdf11d3bf8e59728428e02021177142c3a_1 | Q: How they demonstrate that language-neutral component is sufficiently general in terms of modeling semantics to allow high-accuracy word-alignment?
Text: Introduction
Multilingual BERT (mBERT; BIBREF0) is gaining popularity as a contextual representation for various multilingual tasks, such as dependency parsing BIBREF1, BIBREF2, cross-lingual natural language inference (XNLI) or named-entity recognition (NER) BIBREF3, BIBREF4, BIBREF5.
BIBREF3 present an exploratory paper showing that mBERT can be used cross-lingually for zero-shot transfer in morphological and syntactic tasks, at least for typologically similar languages. They also study an interesting semantic task, sentence-retrieval, with promising initial results. Their work leaves many open questions in terms of how good the cross-lingual mBERT representation is for semantics, motivating our work.
In this paper, we directly assess the semantic cross-lingual properties of mBERT. To avoid methodological issues with zero-shot transfer (possible language overfitting, hyper-parameter tuning), we selected tasks that only involve a direct comparison of the representations: cross-lingual sentence retrieval, word alignment, and machine translation quality estimation (MT QE). Additionally, we explore how the language is represented in the embeddings by training language identification classifiers and assessing how the representation similarity corresponds to phylogenetic language families.
Our results show that the mBERT representations, even after language-agnostic fine-tuning, are not very language-neutral. However, the identity of the language can be approximated as a constant shift in the representation space. An even higher language-neutrality can still be achieved by a linear projection fitted on a small amount of parallel data.
Finally, we present attempts to strengthen the language-neutral component via fine-tuning: first, for multi-lingual syntactic and morphological analysis; second, towards language identity removal via an adversarial classifier.
Related Work
Since the publication of mBERT BIBREF0, many positive experimental results were published.
BIBREF2 reached impressive results in zero-shot dependency parsing. However, the representation used for the parser was a bilingual projection of the contextual embeddings based on word-alignment trained on parallel data.
BIBREF3 recently examined the cross-lingual properties of mBERT on zero-shot NER and part-of-speech (POS) tagging but the success of zero-shot transfer strongly depends on how typologically similar the languages are. Similarly, BIBREF4 trained good multilingual models for POS tagging, NER, and XNLI, but struggled to achieve good results in the zero-shot setup.
BIBREF3 assessed mBERT on cross-lingual sentence retrieval between three language pairs. They observed that if they subtract the average difference between the embeddings from the target language representation, the retrieval accuracy significantly increases. We systematically study this idea in the later sections.
Many experiments show BIBREF4, BIBREF5, BIBREF1 that downstream task models can extract relevant features from the multilingual representations. But these results do not directly show language-neutrality, i.e., to what extent similar phenomena are represented similarly across languages. The models can obtain the task-specific information based on the knowledge of the language, which (as we show later) can be easily identified. Our choice of evaluation tasks eliminates this risk by directly comparing the representations. Limited success in zero-shot setups and the need for explicit bilingual projection in order to work well BIBREF3, BIBREF4, BIBREF6 also show the limited language neutrality of mBERT.
Centering mBERT Representations
Following BIBREF3, we hypothesize that a sentence representation in mBERT is composed of a language-specific component, which identifies the language of the sentence, and a language-neutral component, which captures the meaning of the sentence in a language-independent way. We assume that the language-specific component is similar across all sentences in the language.
We thus try to remove the language-specific information from the representations by centering the representations of sentences in each language so that their average lies at the origin of the vector space. We do this by estimating the language centroid as the mean of the mBERT representations for a set of sentences in that language and subtracting the language centroid from the contextual embeddings.
We then analyze the semantic properties of both the original and the centered representations using a range of probing tasks. For all tasks, we test all layers of the model. For tasks utilizing a single-vector sentence representation, we test both the vector corresponding to the [cls] token and mean-pooled states.
Probing Tasks
We employ five probing tasks to evaluate the language neutrality of the representations.
Probing Tasks ::: Language Identification.
With a representation that captures all phenomena in a language-neutral way, it should be difficult to determine what language the sentence is written in. Unlike other tasks, language identification does require fitting a classifier. We train a linear classifier on top of a sentence representation to try to classify the language of the sentence.
Probing Tasks ::: Language Similarity.
Experiments with POS tagging BIBREF3 suggest that similar languages tend to get similar representations on average. We quantify that observation by measuring how languages tend to cluster by the language families using V-measure over hierarchical clustering of the language centroids BIBREF7.
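A sketch of this measurement, assuming `centroids` is a (number of languages x dimension) array of language centroids and `families` lists the gold language-family label of each language; scikit-learn's agglomerative clustering and V-measure are used for brevity and are not necessarily the exact implementation of BIBREF7.

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import v_measure_score

def family_v_measure(centroids, families):
    # Cluster the centroids into as many groups as there are gold families, then score the match.
    n_families = len(set(families))
    predicted = AgglomerativeClustering(n_clusters=n_families).fit_predict(centroids)
    return v_measure_score(families, predicted)
```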
Probing Tasks ::: Parallel Sentence Retrieval.
For each sentence in a multi-parallel corpus, we compute the cosine distance of its representation with representations of all sentences on the parallel side of the corpus and select the sentence with the smallest distance.
Besides the plain and centered [cls] and mean-pooled representations, we evaluate explicit projection into the “English space”. For each language, we fit a linear regression projecting the representations into English representation space using a small set of parallel sentences.
Probing Tasks ::: Word Alignment.
While sentence retrieval could be done with keyword spotting, computing bilingual alignment requires resolving detailed correspondence on the word level.
We find the word alignment as a minimum weighted edge cover of a bipartite graph. The graph connects the tokens of the sentences in the two languages and edges between them are weighted with the cosine distance of the token representation. Tokens that get split into multiple subwords are represented using the average of the embeddings of the subwords. Note that this algorithm is invariant to representation centering which would only change the edge weights by a constant offset.
We evaluate the alignment using the F$_1$ score over both sure and possible alignment links in a manually aligned gold standard.
Probing Tasks ::: MT Quality Estimation.
MT QE assesses the quality of an MT system output without having access to a reference translation.
The standard evaluation metric is the correlation with the Human-targeted Translation Error Rate which is the number of edit operations a human translator would need to do to correct the system output. This is a more challenging task than the two previous ones because it requires capturing more fine-grained differences in meaning.
We evaluate how cosine distance of the representation of the source sentence and of the MT output reflects the translation quality. In addition to plain and centered representations, we also test trained bilingual projection, and a fully supervised regression trained on training data.
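A sketch of the unsupervised variant of this evaluation: the per-sentence cosine distance between the source and MT-output representations is correlated with HTER. Pearson correlation is assumed here; the supervised regression mentioned above would instead be fitted on these features.

```python
import numpy as np
from scipy.stats import pearsonr

def qe_correlation(src_vecs, mt_vecs, hter):
    # Per-sentence cosine distance between source and MT-output representations.
    s = src_vecs / np.linalg.norm(src_vecs, axis=1, keepdims=True)
    m = mt_vecs / np.linalg.norm(mt_vecs, axis=1, keepdims=True)
    distances = 1.0 - np.sum(s * m, axis=1)
    corr, _ = pearsonr(distances, hter)
    return corr
```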
Experimental Setup
We use a pre-trained mBERT model that was made public with the BERT release. The model dimension is 768, hidden layer dimension 3072, self-attention uses 12 heads, the model has 12 layers. It uses a vocabulary of 120k wordpieces that is shared for all languages.
To train the language identification classifier, for each of the BERT languages we randomly selected 110k sentences of at least 20 characters from Wikipedia, and keep 5k for validation and 5k for testing for each language. The training data are also used for estimating the language centroids.
For parallel sentence retrieval, we use a multi-parallel corpus of test data from the WMT14 evaluation campaign BIBREF8 with 3,000 sentences in Czech, English, French, German, Hindi, and Russian. The linear projection experiment uses the WMT14 development data.
We use manually annotated word alignment datasets to evaluate word alignment between English on one side and Czech (2.5k sent.; BIBREF9), Swedish (192 sent.; BIBREF10), German (508 sent.), French (447 sent.; BIBREF11) and Romanian (248 sent.; BIBREF12) on the other side. We compare the results with FastAlign BIBREF13 that was provided with 1M additional parallel sentences from ParaCrawl BIBREF14 in addition to the test data.
For MT QE, we use the English-German data provided for the WMT19 QE Shared Task BIBREF15, consisting of training and test data with source sentences, their automatic translations, and manual corrections.
Results ::: Language Identification.
Table TABREF7 shows that centering the sentence representations considerably decreases the accuracy of language identification, especially in the case of mean-pooled embeddings. This indicates that the proposed centering procedure does indeed remove the language-specific information to a great extent.
Results ::: Language Similarity.
Figure FIGREF9 is a tSNE plot BIBREF16 of the language centroids, showing that the similarity of the centroids tends to correspond to the similarity of the languages. Table TABREF10 confirms that the hierarchical clustering of the language centroids mostly corresponds to the language families.
Results ::: Parallel Sentence Retrieval.
Results in Table TABREF12 reveal that the representation centering dramatically improves the retrieval accuracy, showing that it makes the representations more language-neutral. However, an explicitly learned projection of the representations leads to a much greater improvement, reaching a close-to-perfect accuracy, even though the projection was fitted on relatively small parallel data. The accuracy is higher for mean-pooled states than for the [cls] embedding and varies according to the layer of mBERT used (see Figure FIGREF13).
Results ::: Word Alignment.
Table TABREF15 shows that word-alignment based on mBERT representations surpasses the outputs of the standard FastAlign tool even if it was provided large parallel corpus. This suggests that word-level semantics are well captured by mBERT contextual embeddings. For this task, learning an explicit projection had a negligible effect on the performance.
Results ::: MT Quality Estimation.
Qualitative results of MT QE are tabulated in Table TABREF18. Unlike sentence retrieval, QE is more sensitive to subtle differences between sentences. Measuring the distance of the non-centered sentence vectors does not correlate with translation quality at all. Centering or explicit projection only leads to a mild correlation, much lower than a supervisedly trained regression; and even better performance is possible BIBREF15. The results show that the linear projection between the representations only captures a rough semantic correspondence, which does not seem to be sufficient for QE, where the most indicative feature appears to be sentence complexity.
Fine-tuning mBERT
We also considered model fine-tuning towards stronger language neutrality. We evaluate two fine-tuned versions of mBERT: UDify, tuned for a multi-lingual dependency parser, and lng-free, tuned to jettison the language-specific information from the representations.
Fine-tuning mBERT ::: UDify
The UDify model BIBREF1 uses mBERT to train a single model for dependency parsing and morphological analysis of 75 languages. During the parser training, mBERT is fine-tuned, which improves the parser accuracy. Results on zero-shot parsing suggest that the fine-tuning leads to more cross-lingual representations with respect to morphology and syntax.
However, our analyses show that fine-tuning mBERT for multilingual dependency parsing does not remove the language identity information from the representations and actually makes the representations less semantically cross-lingual.
Fine-tuning mBERT ::: lng-free
In this experiment, we try to make the representations more language-neutral by removing the language identity from the model using an adversarial approach. We continue training mBERT in a multi-task learning setup with the masked LM objective with the same sampling procedure BIBREF0 jointly with adversarial language ID classifiers BIBREF17. For each layer, we train one classifier for the [cls] token and one for the mean-pooled hidden states with the gradient reversal layer BIBREF18 between mBERT and the classifier.
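A gradient reversal layer of the kind referenced here can be implemented in a few lines of PyTorch. The sketch below is generic rather than the exact training code of the lng-free model; the lambda scaling factor and the usage comment are assumptions.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None   # no gradient for lambd

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage sketch: the adversarial language-ID classifier sees the representation through the
# reversal, so minimizing its loss pushes the encoder to hide language identity, e.g.
#   logits = lang_classifier(grad_reverse(mean_pooled_states, 1.0))
```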
The results reveal that the adversarial removal of language information succeeds in dramatically decreasing the accuracy of the language identification classifier; the effect is strongest in deeper layers, for which the standard mBERT tends to perform better (see Figure FIGREF22). However, the other tasks are not affected by the adversarial fine-tuning.
Conclusions
Using a set of semantically oriented tasks that require explicit semantic cross-lingual representations, we showed that mBERT contextual embeddings do not represent similar semantic phenomena similarly and therefore they are not directly usable for zero-shot cross-lingual tasks.
Contextual embeddings of mBERT capture similarities between languages and cluster the languages by their families. Neither cross-lingual fine-tuning nor adversarial language identity removal breaks this property. A part of language information is encoded by the position in the embedding space, thus a certain degree of cross-linguality can be achieved by centering the representations for each language. Exploiting this property allows a good cross-lingual sentence retrieval performance and bilingual word alignment (which is invariant to the shift). A good cross-lingual representation can be achieved by fitting a supervised projection on a small parallel corpus. | explicit projection had a negligible effect on the performance |
222b2469eede9a0448e0226c6c742e8c91522af3 | 222b2469eede9a0448e0226c6c742e8c91522af3_0 | Q: Are language-specific and language-neutral components disjunctive?
Text: Introduction
Multilingual BERT (mBERT; BIBREF0) is gaining popularity as a contextual representation for various multilingual tasks, such as dependency parsing BIBREF1, BIBREF2, cross-lingual natural language inference (XNLI) or named-entity recognition (NER) BIBREF3, BIBREF4, BIBREF5.
BIBREF3 present an exploratory paper showing that mBERT can be used cross-lingually for zero-shot transfer in morphological and syntactic tasks, at least for typologically similar languages. They also study an interesting semantic task, sentence-retrieval, with promising initial results. Their work leaves many open questions in terms of how good the cross-lingual mBERT representation is for semantics, motivating our work.
In this paper, we directly assess the semantic cross-lingual properties of mBERT. To avoid methodological issues with zero-shot transfer (possible language overfitting, hyper-parameter tuning), we selected tasks that only involve a direct comparison of the representations: cross-lingual sentence retrieval, word alignment, and machine translation quality estimation (MT QE). Additionally, we explore how the language is represented in the embeddings by training language identification classifiers and assessing how the representation similarity corresponds to phylogenetic language families.
Our results show that the mBERT representations, even after language-agnostic fine-tuning, are not very language-neutral. However, the identity of the language can be approximated as a constant shift in the representation space. An even higher language-neutrality can still be achieved by a linear projection fitted on a small amount of parallel data.
Finally, we present attempts to strengthen the language-neutral component via fine-tuning: first, for multi-lingual syntactic and morphological analysis; second, towards language identity removal via an adversarial classifier.
Related Work
Since the publication of mBERT BIBREF0, many positive experimental results were published.
BIBREF2 reached impressive results in zero-shot dependency parsing. However, the representation used for the parser was a bilingual projection of the contextual embeddings based on word-alignment trained on parallel data.
BIBREF3 recently examined the cross-lingual properties of mBERT on zero-shot NER and part-of-speech (POS) tagging but the success of zero-shot transfer strongly depends on how typologically similar the languages are. Similarly, BIBREF4 trained good multilingual models for POS tagging, NER, and XNLI, but struggled to achieve good results in the zero-shot setup.
BIBREF3 assessed mBERT on cross-lingual sentence retrieval between three language pairs. They observed that if they subtract the average difference between the embeddings from the target language representation, the retrieval accuracy significantly increases. We systematically study this idea in the later sections.
Many experiments BIBREF4, BIBREF5, BIBREF1 show that downstream task models can extract relevant features from the multilingual representations. But these results do not directly show language-neutrality, i.e., to what extent similar phenomena are represented similarly across languages. The models can obtain the task-specific information based on the knowledge of the language, which (as we show later) can be easily identified. Our choice of evaluation tasks eliminates this risk by directly comparing the representations. The limited success in zero-shot setups and the need for explicit bilingual projection in order to work well BIBREF3, BIBREF4, BIBREF6 also show the limited language neutrality of mBERT.
Centering mBERT Representations
Following BIBREF3, we hypothesize that a sentence representation in mBERT is composed of a language-specific component, which identifies the language of the sentence, and a language-neutral component, which captures the meaning of the sentence in a language-independent way. We assume that the language-specific component is similar across all sentences in the language.
We thus try to remove the language-specific information from the representations by centering the representations of sentences in each language so that their average lies at the origin of the vector space. We do this by estimating the language centroid as the mean of the mBERT representations for a set of sentences in that language and subtracting the language centroid from the contextual embeddings.
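A minimal sketch of this centering step, assuming the sentence embeddings have already been computed and grouped by language (the data structure is illustrative):

```python
import numpy as np

def center_by_language(embeddings_by_lang):
    """embeddings_by_lang: dict mapping language -> (n_sentences, dim) array of mBERT sentence vectors."""
    centroids = {lang: X.mean(axis=0) for lang, X in embeddings_by_lang.items()}
    centered = {lang: X - centroids[lang] for lang, X in embeddings_by_lang.items()}
    return centered, centroids
```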
We then analyze the semantic properties of both the original and the centered representations using a range of probing tasks. For all tasks, we test all layers of the model. For tasks utilizing a single-vector sentence representation, we test both the vector corresponding to the [cls] token and mean-pooled states.
Probing Tasks
We employ five probing tasks to evaluate the language neutrality of the representations.
Probing Tasks ::: Language Identification.
With a representation that captures all phenomena in a language-neutral way, it should be difficult to determine what language the sentence is written in. Unlike other tasks, language identification does require fitting a classifier. We train a linear classifier on top of a sentence representation to try to classify the language of the sentence.
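Such a linear probe can be fitted, for example, with scikit-learn; the sketch below assumes the sentence vectors and language labels are already available and is not necessarily the exact classifier used here.

```python
from sklearn.linear_model import LogisticRegression

def language_id_accuracy(train_X, train_y, test_X, test_y):
    """Train a linear classifier on sentence vectors and report language-ID accuracy."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(train_X, train_y)
    return clf.score(test_X, test_y)
```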
Probing Tasks ::: Language Similarity.
Experiments with POS tagging BIBREF3 suggest that similar languages tend to get similar representations on average. We quantify that observation by measuring how well languages cluster by language family, using the V-measure over a hierarchical clustering of the language centroids BIBREF7.
Probing Tasks ::: Parallel Sentence Retrieval.
For each sentence in a multi-parallel corpus, we compute the cosine distance of its representation with representations of all sentences on the parallel side of the corpus and select the sentence with the smallest distance.
Besides the plain and centered [cls] and mean-pooled representations, we evaluate explicit projection into the “English space”. For each language, we fit a linear regression projecting the representations into English representation space using a small set of parallel sentences.
Probing Tasks ::: Word Alignment.
While sentence retrieval could be done with keyword spotting, computing bilingual alignment requires resolving detailed correspondence on the word level.
We find the word alignment as a minimum weighted edge cover of a bipartite graph. The graph connects the tokens of the sentences in the two languages and edges between them are weighted with the cosine distance of the token representation. Tokens that get split into multiple subwords are represented using the average of the embeddings of the subwords. Note that this algorithm is invariant to representation centering which would only change the edge weights by a constant offset.
We evaluate the alignment using the F$_1$ score over both sure and possible alignment links in a manually aligned gold standard.
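Following the standard convention for word-alignment evaluation, this score can be computed as below; the sketch assumes alignments are represented as sets of token-index pairs and that the sure links are a subset of the possible links.

```python
def alignment_f1(predicted, sure, possible):
    """predicted, sure, possible: sets of (src_idx, tgt_idx) pairs; sure is a subset of possible."""
    precision = len(predicted & possible) / len(predicted)   # predicted links allowed by gold
    recall = len(predicted & sure) / len(sure)               # sure links recovered
    return 2 * precision * recall / (precision + recall)
```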
Probing Tasks ::: MT Quality Estimation.
MT QE assesses the quality of an MT system output without having access to a reference translation.
The standard evaluation metric is the correlation with the Human-targeted Translation Error Rate which is the number of edit operations a human translator would need to do to correct the system output. This is a more challenging task than the two previous ones because it requires capturing more fine-grained differences in meaning.
We evaluate how cosine distance of the representation of the source sentence and of the MT output reflects the translation quality. In addition to plain and centered representations, we also test trained bilingual projection, and a fully supervised regression trained on training data.
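In its unsupervised variant, this evaluation reduces to correlating a cosine distance with the HTER scores, roughly as follows (a sketch; variable names are illustrative):

```python
import numpy as np
from scipy.stats import pearsonr

def qe_correlation(src_vecs, mt_vecs, hter):
    """Correlate cosine distance of source/MT sentence vectors with HTER scores."""
    src = src_vecs / np.linalg.norm(src_vecs, axis=1, keepdims=True)
    mt = mt_vecs / np.linalg.norm(mt_vecs, axis=1, keepdims=True)
    cos_dist = 1.0 - (src * mt).sum(axis=1)
    r, _ = pearsonr(cos_dist, hter)
    return r
```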
Experimental Setup
We use a pre-trained mBERT model that was made public with the BERT release. The model dimension is 768, hidden layer dimension 3072, self-attention uses 12 heads, the model has 12 layers. It uses a vocabulary of 120k wordpieces that is shared for all languages.
To train the language identification classifier, for each of the BERT languages we randomly selected 110k sentences of at least 20 characters from Wikipedia, and kept 5k for validation and 5k for testing for each language. The training data are also used for estimating the language centroids.
For parallel sentence retrieval, we use a multi-parallel corpus of test data from the WMT14 evaluation campaign BIBREF8 with 3,000 sentences in Czech, English, French, German, Hindi, and Russian. The linear projection experiment uses the WMT14 development data.
We use manually annotated word alignment datasets to evaluate word alignment between English on one side and Czech (2.5k sent.; BIBREF9), Swedish (192 sent.; BIBREF10), German (508 sent.), French (447 sent.; BIBREF11) and Romanian (248 sent.; BIBREF12) on the other side. We compare the results with FastAlign BIBREF13 that was provided with 1M additional parallel sentences from ParaCrawl BIBREF14 in addition to the test data.
For MT QE, we use the English-German data provided for the WMT19 QE Shared Task BIBREF15, consisting of training and test data with source sentences, their automatic translations, and their manual corrections.
Results ::: Language Identification.
Table TABREF7 shows that centering the sentence representations considerably decreases the accuracy of language identification, especially in the case of mean-pooled embeddings. This indicates that the proposed centering procedure does indeed remove the language-specific information to a great extent.
Results ::: Language Similarity.
Figure FIGREF9 is a tSNE plot BIBREF16 of the language centroids, showing that the similarity of the centroids tends to correspond to the similarity of the languages. Table TABREF10 confirms that the hierarchical clustering of the language centroids mostly corresponds to the language families.
Results ::: Parallel Sentence Retrieval.
Results in Table TABREF12 reveal that the representation centering dramatically improves the retrieval accuracy, showing that it makes the representations more language-neutral. However, an explicitly learned projection of the representations leads to a much greater improvement, reaching a close-to-perfect accuracy, even though the projection was fitted on relatively small parallel data. The accuracy is higher for mean-pooled states than for the [cls] embedding and varies according to the layer of mBERT used (see Figure FIGREF13).
Results ::: Word Alignment.
Table TABREF15 shows that word alignment based on mBERT representations surpasses the outputs of the standard FastAlign tool even though FastAlign was provided with a large parallel corpus. This suggests that word-level semantics are well captured by mBERT contextual embeddings. For this task, learning an explicit projection had a negligible effect on the performance.
Results ::: MT Quality Estimation.
Qualitative results of MT QE are tabulated in Table TABREF18. Unlike sentence retrieval, QE is more sensitive to subtle differences between sentences. Measuring the distance of the non-centered sentence vectors does not correlate with translation quality at all. Centering or explicit projection only leads to a mild correlation, much lower than a supervised regression trained on the task data; and even better performance is possible BIBREF15. The results show that the linear projection between the representations only captures a rough semantic correspondence, which does not seem to be sufficient for QE, where the most indicative feature appears to be sentence complexity.
Fine-tuning mBERT
We also considered model fine-tuning towards stronger language neutrality. We evaluate two fine-tuned versions of mBERT: UDify, tuned for a multi-lingual dependency parser, and lng-free, tuned to jettison the language-specific information from the representations.
Fine-tuning mBERT ::: UDify
The UDify model BIBREF1 uses mBERT to train a single model for dependency parsing and morphological analysis of 75 languages. During the parser training, mBERT is fine-tuned, which improves the parser accuracy. Results on zero-shot parsing suggest that the fine-tuning leads to more cross-lingual representations with respect to morphology and syntax.
However, our analyses show that fine-tuning mBERT for multilingual dependency parsing does not remove the language identity information from the representations and actually makes the representations less semantically cross-lingual.
Fine-tuning mBERT ::: lng-free
In this experiment, we try to make the representations more language-neutral by removing the language identity from the model using an adversarial approach. We continue training mBERT in a multi-task learning setup with the masked LM objective with the same sampling procedure BIBREF0 jointly with adversarial language ID classifiers BIBREF17. For each layer, we train one classifier for the [cls] token and one for the mean-pooled hidden states with the gradient reversal layer BIBREF18 between mBERT and the classifier.
The results reveal that the adversarial removal of language information succeeds in dramatically decreasing the accuracy of the language identification classifier; the effect is strongest in deeper layers, for which the standard mBERT tends to perform better (see Figure FIGREF22). However, the other tasks are not affected by the adversarial fine-tuning.
Conclusions
Using a set of semantically oriented tasks that require explicit semantic cross-lingual representations, we showed that mBERT contextual embeddings do not represent similar semantic phenomena similarly and therefore they are not directly usable for zero-shot cross-lingual tasks.
Contextual embeddings of mBERT capture similarities between languages and cluster the languages by their families. Neither cross-lingual fine-tuning nor adversarial language identity removal breaks this property. A part of language information is encoded by the position in the embedding space, thus a certain degree of cross-linguality can be achieved by centering the representations for each language. Exploiting this property allows a good cross-lingual sentence retrieval performance and bilingual word alignment (which is invariant to the shift). A good cross-lingual representation can be achieved by fitting a supervised projection on a small parallel corpus. | No |
6f8386ad64dce3a20bc75165c5c7591df8f419cf | 6f8386ad64dce3a20bc75165c5c7591df8f419cf_0 | Q: How they show that mBERT representations can be split into a language-specific component and a language-neutral component?
Text: Introduction
Multilingual BERT (mBERT; BIBREF0) is gaining popularity as a contextual representation for various multilingual tasks, such as dependency parsing BIBREF1, BIBREF2, cross-lingual natural language inference (XNLI) or named-entity recognition (NER) BIBREF3, BIBREF4, BIBREF5.
BIBREF3 present an exploratory paper showing that mBERT can be used cross-lingually for zero-shot transfer in morphological and syntactic tasks, at least for typologically similar languages. They also study an interesting semantic task, sentence-retrieval, with promising initial results. Their work leaves many open questions in terms of how good the cross-lingual mBERT representation is for semantics, motivating our work.
In this paper, we directly assess the semantic cross-lingual properties of mBERT. To avoid methodological issues with zero-shot transfer (possible language overfitting, hyper-parameter tuning), we selected tasks that only involve a direct comparison of the representations: cross-lingual sentence retrieval, word alignment, and machine translation quality estimation (MT QE). Additionally, we explore how the language is represented in the embeddings by training language identification classifiers and assessing how the representation similarity corresponds to phylogenetic language families.
Our results show that the mBERT representations, even after language-agnostic fine-tuning, are not very language-neutral. However, the identity of the language can be approximated as a constant shift in the representation space. An even higher language-neutrality can still be achieved by a linear projection fitted on a small amount of parallel data.
Finally, we present attempts to strengthen the language-neutral component via fine-tuning: first, for multi-lingual syntactic and morphological analysis; second, towards language identity removal via an adversarial classifier.
Related Work
Since the publication of mBERT BIBREF0, many positive experimental results were published.
BIBREF2 reached impressive results in zero-shot dependency parsing. However, the representation used for the parser was a bilingual projection of the contextual embeddings based on word-alignment trained on parallel data.
BIBREF3 recently examined the cross-lingual properties of mBERT on zero-shot NER and part-of-speech (POS) tagging but the success of zero-shot transfer strongly depends on how typologically similar the languages are. Similarly, BIBREF4 trained good multilingual models for POS tagging, NER, and XNLI, but struggled to achieve good results in the zero-shot setup.
BIBREF3 assessed mBERT on cross-lingual sentence retrieval between three language pairs. They observed that if they subtract the average difference between the embeddings from the target language representation, the retrieval accuracy significantly increases. We systematically study this idea in the later sections.
Many experiments BIBREF4, BIBREF5, BIBREF1 show that downstream task models can extract relevant features from the multilingual representations. But these results do not directly show language-neutrality, i.e., to what extent similar phenomena are represented similarly across languages. The models can obtain the task-specific information based on the knowledge of the language, which (as we show later) can be easily identified. Our choice of evaluation tasks eliminates this risk by directly comparing the representations. The limited success in zero-shot setups and the need for explicit bilingual projection in order to work well BIBREF3, BIBREF4, BIBREF6 also show the limited language neutrality of mBERT.
Centering mBERT Representations
Following BIBREF3, we hypothesize that a sentence representation in mBERT is composed of a language-specific component, which identifies the language of the sentence, and a language-neutral component, which captures the meaning of the sentence in a language-independent way. We assume that the language-specific component is similar across all sentences in the language.
We thus try to remove the language-specific information from the representations by centering the representations of sentences in each language so that their average lies at the origin of the vector space. We do this by estimating the language centroid as the mean of the mBERT representations for a set of sentences in that language and subtracting the language centroid from the contextual embeddings.
We then analyze the semantic properties of both the original and the centered representations using a range of probing tasks. For all tasks, we test all layers of the model. For tasks utilizing a single-vector sentence representation, we test both the vector corresponding to the [cls] token and mean-pooled states.
Probing Tasks
We employ five probing tasks to evaluate the language neutrality of the representations.
Probing Tasks ::: Language Identification.
With a representation that captures all phenomena in a language-neutral way, it should be difficult to determine what language the sentence is written in. Unlike other tasks, language identification does require fitting a classifier. We train a linear classifier on top of a sentence representation to try to classify the language of the sentence.
Probing Tasks ::: Language Similarity.
Experiments with POS tagging BIBREF3 suggest that similar languages tend to get similar representations on average. We quantify that observation by measuring how well languages cluster by language family, using the V-measure over a hierarchical clustering of the language centroids BIBREF7.
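One way to carry out such a measurement is sketched below, assuming the language centroids and gold language-family labels are available; the linkage method and the number of clusters are assumptions, not necessarily the settings used in the experiments.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import v_measure_score

def family_v_measure(centroids, family_labels, n_clusters):
    """centroids: (n_languages, dim) array; family_labels: gold language-family label per language."""
    Z = linkage(centroids, method="average", metric="cosine")   # hierarchical clustering of centroids
    predicted = fcluster(Z, t=n_clusters, criterion="maxclust") # cut the dendrogram into n clusters
    return v_measure_score(family_labels, predicted)
```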
Probing Tasks ::: Parallel Sentence Retrieval.
For each sentence in a multi-parallel corpus, we compute the cosine distance of its representation with representations of all sentences on the parallel side of the corpus and select the sentence with the smallest distance.
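The retrieval accuracy can then be computed roughly as follows; the sketch assumes that mutually parallel sentences share the same index in the two embedding matrices.

```python
import numpy as np

def retrieval_accuracy(src_vecs, tgt_vecs):
    """Nearest-neighbour retrieval under cosine similarity; gold pairs share the same row index."""
    src = src_vecs / np.linalg.norm(src_vecs, axis=1, keepdims=True)
    tgt = tgt_vecs / np.linalg.norm(tgt_vecs, axis=1, keepdims=True)
    nearest = (src @ tgt.T).argmax(axis=1)        # max cosine similarity = min cosine distance
    return (nearest == np.arange(len(src))).mean()
```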
Besides the plain and centered [cls] and mean-pooled representations, we evaluate explicit projection into the “English space”. For each language, we fit a linear regression projecting the representations into English representation space using a small set of parallel sentences.
Probing Tasks ::: Word Alignment.
While sentence retrieval could be done with keyword spotting, computing bilingual alignment requires resolving detailed correspondence on the word level.
We find the word alignment as a minimum weighted edge cover of a bipartite graph. The graph connects the tokens of the sentences in the two languages and edges between them are weighted with the cosine distance of the token representation. Tokens that get split into multiple subwords are represented using the average of the embeddings of the subwords. Note that this algorithm is invariant to representation centering which would only change the edge weights by a constant offset.
We evaluate the alignment using the F$_1$ score over both sure and possible alignment links in a manually aligned gold standard.
Probing Tasks ::: MT Quality Estimation.
MT QE assesses the quality of an MT system output without having access to a reference translation.
The standard evaluation metric is the correlation with the Human-targeted Translation Error Rate which is the number of edit operations a human translator would need to do to correct the system output. This is a more challenging task than the two previous ones because it requires capturing more fine-grained differences in meaning.
We evaluate how cosine distance of the representation of the source sentence and of the MT output reflects the translation quality. In addition to plain and centered representations, we also test trained bilingual projection, and a fully supervised regression trained on training data.
Experimental Setup
We use a pre-trained mBERT model that was made public with the BERT release. The model dimension is 768, hidden layer dimension 3072, self-attention uses 12 heads, the model has 12 layers. It uses a vocabulary of 120k wordpieces that is shared for all languages.
To train the language identification classifier, for each of the BERT languages we randomly selected 110k sentences of at least 20 characters from Wikipedia, and kept 5k for validation and 5k for testing for each language. The training data are also used for estimating the language centroids.
For parallel sentence retrieval, we use a multi-parallel corpus of test data from the WMT14 evaluation campaign BIBREF8 with 3,000 sentences in Czech, English, French, German, Hindi, and Russian. The linear projection experiment uses the WMT14 development data.
We use manually annotated word alignment datasets to evaluate word alignment between English on one side and Czech (2.5k sent.; BIBREF9), Swedish (192 sent.; BIBREF10), German (508 sent.), French (447 sent.; BIBREF11) and Romanian (248 sent.; BIBREF12) on the other side. We compare the results with FastAlign BIBREF13 that was provided with 1M additional parallel sentences from ParaCrawl BIBREF14 in addition to the test data.
For MT QE, we use the English-German data provided for the WMT19 QE Shared Task BIBREF15, consisting of training and test data with source sentences, their automatic translations, and their manual corrections.
Results ::: Language Identification.
Table TABREF7 shows that centering the sentence representations considerably decreases the accuracy of language identification, especially in the case of mean-pooled embeddings. This indicates that the proposed centering procedure does indeed remove the language-specific information to a great extent.
Results ::: Language Similarity.
Figure FIGREF9 is a tSNE plot BIBREF16 of the language centroids, showing that the similarity of the centroids tends to correspond to the similarity of the languages. Table TABREF10 confirms that the hierarchical clustering of the language centroids mostly corresponds to the language families.
Results ::: Parallel Sentence Retrieval.
Results in Table TABREF12 reveal that the representation centering dramatically improves the retrieval accuracy, showing that it makes the representations more language-neutral. However, an explicitly learned projection of the representations leads to a much greater improvement, reaching a close-to-perfect accuracy, even though the projection was fitted on relatively small parallel data. The accuracy is higher for mean-pooled states than for the [cls] embedding and varies according to the layer of mBERT used (see Figure FIGREF13).
Results ::: Word Alignment.
Table TABREF15 shows that word alignment based on mBERT representations surpasses the outputs of the standard FastAlign tool even though FastAlign was provided with a large parallel corpus. This suggests that word-level semantics are well captured by mBERT contextual embeddings. For this task, learning an explicit projection had a negligible effect on the performance.
Results ::: MT Quality Estimation.
Qualitative results of MT QE are tabulated in Table TABREF18. Unlike sentence retrieval, QE is more sensitive to subtle differences between sentences. Measuring the distance of the non-centered sentence vectors does not correlate with translation quality at all. Centering or explicit projection only leads to a mild correlation, much lower than a supervised regression trained on the task data; and even better performance is possible BIBREF15. The results show that the linear projection between the representations only captures a rough semantic correspondence, which does not seem to be sufficient for QE, where the most indicative feature appears to be sentence complexity.
Fine-tuning mBERT
We also considered model fine-tuning towards stronger language neutrality. We evaluate two fine-tuned versions of mBERT: UDify, tuned for a multi-lingual dependency parser, and lng-free, tuned to jettison the language-specific information from the representations.
Fine-tuning mBERT ::: UDify
The UDify model BIBREF1 uses mBERT to train a single model for dependency parsing and morphological analysis of 75 languages. During the parser training, mBERT is fine-tuned, which improves the parser accuracy. Results on zero-shot parsing suggest that the fine-tuning leads to more cross-lingual representations with respect to morphology and syntax.
However, our analyses show that fine-tuning mBERT for multilingual dependency parsing does not remove the language identity information from the representations and actually makes the representations less semantically cross-lingual.
Fine-tuning mBERT ::: lng-free
In this experiment, we try to make the representations more language-neutral by removing the language identity from the model using an adversarial approach. We continue training mBERT in a multi-task learning setup with the masked LM objective with the same sampling procedure BIBREF0 jointly with adversarial language ID classifiers BIBREF17. For each layer, we train one classifier for the [cls] token and one for the mean-pooled hidden states with the gradient reversal layer BIBREF18 between mBERT and the classifier.
The results reveal that the adversarial removal of language information succeeds in dramatically decreasing the accuracy of the language identification classifier; the effect is strongest in deeper layers, for which the standard mBERT tends to perform better (see Figure FIGREF22). However, the other tasks are not affected by the adversarial fine-tuning.
Conclusions
Using a set of semantically oriented tasks that require explicit semantic cross-lingual representations, we showed that mBERT contextual embeddings do not represent similar semantic phenomena similarly and therefore they are not directly usable for zero-shot cross-lingual tasks.
Contextual embeddings of mBERT capture similarities between languages and cluster the languages by their families. Neither cross-lingual fine-tuning nor adversarial language identity removal breaks this property. A part of language information is encoded by the position in the embedding space, thus a certain degree of cross-linguality can be achieved by centering the representations for each language. Exploiting this property allows a good cross-lingual sentence retrieval performance and bilingual word alignment (which is invariant to the shift). A good cross-lingual representation can be achieved by fitting a supervised projection on a small parallel corpus. | We thus try to remove the language-specific information from the representations by centering the representations of sentences in each language so that their average lies at the origin of the vector space. |
81dc39ee6cdacf90d5f0f62134bf390a29146c65 | 81dc39ee6cdacf90d5f0f62134bf390a29146c65_0 | Q: What challenges this work presents that must be solved to build better language-neutral representations?
Text: Introduction
Multilingual BERT (mBERT; BIBREF0) is gaining popularity as a contextual representation for various multilingual tasks, such as dependency parsing BIBREF1, BIBREF2, cross-lingual natural language inference (XNLI) or named-entity recognition (NER) BIBREF3, BIBREF4, BIBREF5.
BIBREF3 present an exploratory paper showing that mBERT can be used cross-lingually for zero-shot transfer in morphological and syntactic tasks, at least for typologically similar languages. They also study an interesting semantic task, sentence-retrieval, with promising initial results. Their work leaves many open questions in terms of how good the cross-lingual mBERT representation is for semantics, motivating our work.
In this paper, we directly assess the semantic cross-lingual properties of mBERT. To avoid methodological issues with zero-shot transfer (possible language overfitting, hyper-parameter tuning), we selected tasks that only involve a direct comparison of the representations: cross-lingual sentence retrieval, word alignment, and machine translation quality estimation (MT QE). Additionally, we explore how the language is represented in the embeddings by training language identification classifiers and assessing how the representation similarity corresponds to phylogenetic language families.
Our results show that the mBERT representations, even after language-agnostic fine-tuning, are not very language-neutral. However, the identity of the language can be approximated as a constant shift in the representation space. An even higher language-neutrality can still be achieved by a linear projection fitted on a small amount of parallel data.
Finally, we present attempts to strengthen the language-neutral component via fine-tuning: first, for multi-lingual syntactic and morphological analysis; second, towards language identity removal via an adversarial classifier.
Related Work
Since the publication of mBERT BIBREF0, many positive experimental results were published.
BIBREF2 reached impressive results in zero-shot dependency parsing. However, the representation used for the parser was a bilingual projection of the contextual embeddings based on word-alignment trained on parallel data.
BIBREF3 recently examined the cross-lingual properties of mBERT on zero-shot NER and part-of-speech (POS) tagging but the success of zero-shot transfer strongly depends on how typologically similar the languages are. Similarly, BIBREF4 trained good multilingual models for POS tagging, NER, and XNLI, but struggled to achieve good results in the zero-shot setup.
BIBREF3 assessed mBERT on cross-lingual sentence retrieval between three language pairs. They observed that if they subtract the average difference between the embeddings from the target language representation, the retrieval accuracy significantly increases. We systematically study this idea in the later sections.
Many experiments BIBREF4, BIBREF5, BIBREF1 show that downstream task models can extract relevant features from the multilingual representations. But these results do not directly show language-neutrality, i.e., to what extent similar phenomena are represented similarly across languages. The models can obtain the task-specific information based on the knowledge of the language, which (as we show later) can be easily identified. Our choice of evaluation tasks eliminates this risk by directly comparing the representations. The limited success in zero-shot setups and the need for explicit bilingual projection in order to work well BIBREF3, BIBREF4, BIBREF6 also show the limited language neutrality of mBERT.
Centering mBERT Representations
Following BIBREF3, we hypothesize that a sentence representation in mBERT is composed of a language-specific component, which identifies the language of the sentence, and a language-neutral component, which captures the meaning of the sentence in a language-independent way. We assume that the language-specific component is similar across all sentences in the language.
We thus try to remove the language-specific information from the representations by centering the representations of sentences in each language so that their average lies at the origin of the vector space. We do this by estimating the language centroid as the mean of the mBERT representations for a set of sentences in that language and subtracting the language centroid from the contextual embeddings.
We then analyze the semantic properties of both the original and the centered representations using a range of probing tasks. For all tasks, we test all layers of the model. For tasks utilizing a single-vector sentence representation, we test both the vector corresponding to the [cls] token and mean-pooled states.
Probing Tasks
We employ five probing tasks to evaluate the language neutrality of the representations.
Probing Tasks ::: Language Identification.
With a representation that captures all phenomena in a language-neutral way, it should be difficult to determine what language the sentence is written in. Unlike other tasks, language identification does require fitting a classifier. We train a linear classifier on top of a sentence representation to try to classify the language of the sentence.
Probing Tasks ::: Language Similarity.
Experiments with POS tagging BIBREF3 suggest that similar languages tend to get similar representations on average. We quantify that observation by measuring how well languages cluster by language family, using the V-measure over a hierarchical clustering of the language centroids BIBREF7.
Probing Tasks ::: Parallel Sentence Retrieval.
For each sentence in a multi-parallel corpus, we compute the cosine distance of its representation with representations of all sentences on the parallel side of the corpus and select the sentence with the smallest distance.
Besides the plain and centered [cls] and mean-pooled representations, we evaluate explicit projection into the “English space”. For each language, we fit a linear regression projecting the representations into English representation space using a small set of parallel sentences.
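Such a projection can be fitted by ordinary least squares on the representations of a small parallel set, for example as follows (a sketch; variable names are illustrative):

```python
import numpy as np

def fit_projection(src_vecs, en_vecs):
    """Least-squares linear map from a source-language space to the English space.
    src_vecs, en_vecs: (n_parallel_sentences, dim) arrays of representations of parallel sentences."""
    W, *_ = np.linalg.lstsq(src_vecs, en_vecs, rcond=None)
    return W

# Apply to new sentences of the source language:  projected = test_src_vecs @ W
```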
Probing Tasks ::: Word Alignment.
While sentence retrieval could be done with keyword spotting, computing bilingual alignment requires resolving detailed correspondence on the word level.
We find the word alignment as a minimum weighted edge cover of a bipartite graph. The graph connects the tokens of the sentences in the two languages and edges between them are weighted with the cosine distance of the token representation. Tokens that get split into multiple subwords are represented using the average of the embeddings of the subwords. Note that this algorithm is invariant to representation centering which would only change the edge weights by a constant offset.
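As a simplified illustration, the sketch below builds the cosine-distance cost matrix and extracts a one-to-one assignment with the Hungarian algorithm; a full minimum-weight edge cover, as described above, would additionally allow unaligned and many-to-one links.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def align_words(src_vecs, tgt_vecs):
    """src_vecs, tgt_vecs: (n_src, dim) and (n_tgt, dim) word embeddings
    (sub-word embeddings already averaged into word embeddings)."""
    cost = cdist(src_vecs, tgt_vecs, metric="cosine")   # cosine distances as edge weights
    rows, cols = linear_sum_assignment(cost)            # one-to-one assignment approximation
    return set(zip(rows.tolist(), cols.tolist()))
```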
We evaluate the alignment using the F$_1$ score over both sure and possible alignment links in a manually aligned gold standard.
Probing Tasks ::: MT Quality Estimation.
MT QE assesses the quality of an MT system output without having access to a reference translation.
The standard evaluation metric is the correlation with the Human-targeted Translation Error Rate which is the number of edit operations a human translator would need to do to correct the system output. This is a more challenging task than the two previous ones because it requires capturing more fine-grained differences in meaning.
We evaluate how cosine distance of the representation of the source sentence and of the MT output reflects the translation quality. In addition to plain and centered representations, we also test trained bilingual projection, and a fully supervised regression trained on training data.
Experimental Setup
We use a pre-trained mBERT model that was made public with the BERT release. The model dimension is 768, hidden layer dimension 3072, self-attention uses 12 heads, the model has 12 layers. It uses a vocabulary of 120k wordpieces that is shared for all languages.
To train the language identification classifier, for each of the BERT languages we randomly selected 110k sentences of at least 20 characters from Wikipedia, and kept 5k for validation and 5k for testing for each language. The training data are also used for estimating the language centroids.
For parallel sentence retrieval, we use a multi-parallel corpus of test data from the WMT14 evaluation campaign BIBREF8 with 3,000 sentences in Czech, English, French, German, Hindi, and Russian. The linear projection experiment uses the WMT14 development data.
We use manually annotated word alignment datasets to evaluate word alignment between English on one side and Czech (2.5k sent.; BIBREF9), Swedish (192 sent.; BIBREF10), German (508 sent.), French (447 sent.; BIBREF11) and Romanian (248 sent.; BIBREF12) on the other side. We compare the results with FastAlign BIBREF13 that was provided with 1M additional parallel sentences from ParaCrawl BIBREF14 in addition to the test data.
For MT QE, we use the English-German data provided for the WMT19 QE Shared Task BIBREF15, consisting of training and test data with source sentences, their automatic translations, and their manual corrections.
Results ::: Language Identification.
Table TABREF7 shows that centering the sentence representations considerably decreases the accuracy of language identification, especially in the case of mean-pooled embeddings. This indicates that the proposed centering procedure does indeed remove the language-specific information to a great extent.
Results ::: Language Similarity.
Figure FIGREF9 is a tSNE plot BIBREF16 of the language centroids, showing that the similarity of the centroids tends to correspond to the similarity of the languages. Table TABREF10 confirms that the hierarchical clustering of the language centroids mostly corresponds to the language families.
Results ::: Parallel Sentence Retrieval.
Results in Table TABREF12 reveal that the representation centering dramatically improves the retrieval accuracy, showing that it makes the representations more language-neutral. However, an explicitly learned projection of the representations leads to a much greater improvement, reaching a close-to-perfect accuracy, even though the projection was fitted on relatively small parallel data. The accuracy is higher for mean-pooled states than for the [cls] embedding and varies according to the layer of mBERT used (see Figure FIGREF13).
Results ::: Word Alignment.
Table TABREF15 shows that word alignment based on mBERT representations surpasses the outputs of the standard FastAlign tool even though FastAlign was provided with a large parallel corpus. This suggests that word-level semantics are well captured by mBERT contextual embeddings. For this task, learning an explicit projection had a negligible effect on the performance.
Results ::: MT Quality Estimation.
Qualitative results of MT QE are tabulated in Table TABREF18. Unlike sentence retrieval, QE is more sensitive to subtle differences between sentences. Measuring the distance of the non-centered sentence vectors does not correlate with translation quality at all. Centering or explicit projection only leads to a mild correlation, much lower than a supervised regression trained on the task data; and even better performance is possible BIBREF15. The results show that the linear projection between the representations only captures a rough semantic correspondence, which does not seem to be sufficient for QE, where the most indicative feature appears to be sentence complexity.
Fine-tuning mBERT
We also considered model fine-tuning towards stronger language neutrality. We evaluate two fine-tuned versions of mBERT: UDify, tuned for a multi-lingual dependency parser, and lng-free, tuned to jettison the language-specific information from the representations.
Fine-tuning mBERT ::: UDify
The UDify model BIBREF1 uses mBERT to train a single model for dependency parsing and morphological analysis of 75 languages. During the parser training, mBERT is fine-tuned, which improves the parser accuracy. Results on zero-shot parsing suggest that the fine-tuning leads to more cross-lingual representations with respect to morphology and syntax.
However, our analyses show that fine-tuning mBERT for multilingual dependency parsing does not remove the language identity information from the representations and actually makes the representations less semantically cross-lingual.
Fine-tuning mBERT ::: lng-free
In this experiment, we try to make the representations more language-neutral by removing the language identity from the model using an adversarial approach. We continue training mBERT in a multi-task learning setup with the masked LM objective with the same sampling procedure BIBREF0 jointly with adversarial language ID classifiers BIBREF17. For each layer, we train one classifier for the [cls] token and one for the mean-pooled hidden states with the gradient reversal layer BIBREF18 between mBERT and the classifier.
The results reveal that the adversarial removal of language information succeeds in dramatically decreasing the accuracy of the language identification classifier; the effect is strongest in deeper layers, for which the standard mBERT tends to perform better (see Figure FIGREF22). However, the other tasks are not affected by the adversarial fine-tuning.
Conclusions
Using a set of semantically oriented tasks that require explicit semantic cross-lingual representations, we showed that mBERT contextual embeddings do not represent similar semantic phenomena similarly and therefore they are not directly usable for zero-shot cross-lingual tasks.
Contextual embeddings of mBERT capture similarities between languages and cluster the languages by their families. Neither cross-lingual fine-tuning nor adversarial language identity removal breaks this property. A part of language information is encoded by the position in the embedding space, thus a certain degree of cross-linguality can be achieved by centering the representations for each language. Exploiting this property allows a good cross-lingual sentence retrieval performance and bilingual word alignment (which is invariant to the shift). A good cross-lingual representation can be achieved by fitting a supervised projection on a small parallel corpus. | contextual embeddings do not represent similar semantic phenomena similarly and therefore they are not directly usable for zero-shot cross-lingual tasks |
eeaceee98ef1f6c971dac7b0b8930ee8060d71c2 | eeaceee98ef1f6c971dac7b0b8930ee8060d71c2_0 | Q: What approaches they propose?
Text: Introduction
Fueled by recent advances in deep-learning and language processing, NLP systems are increasingly being used for prediction and decision-making in many fields BIBREF0, including sensitive ones such as health, commerce and law BIBREF1. Unfortunately, these highly flexible and highly effective neural models are also opaque. There is therefore a critical need for explaining learning-based models' decisions.
The emerging research topic of interpretability or explainability has grown rapidly in recent years. Unfortunately, not without growing pains.
One such pain is the challenge of defining—and evaluating—what constitutes a quality interpretation. Current approaches define interpretation in a rather ad-hoc manner, motivated by practical use-cases and applications. However, this view often fails to distinguish between distinct aspects of the interpretation's quality, such as readability, plausibility and faithfulness BIBREF2. We argue (§SECREF2, §SECREF5) such conflation is harmful, and that faithfulness should be defined and evaluated explicitly, and independently from plausibility.
Our main focus is the evaluation of the faithfulness of an explanation. Intuitively, a faithful interpretation is one that accurately represents the reasoning process behind the model's prediction. We find this to be a pressing issue in explainability: in cases where an explanation is required to be faithful, imperfect or misleading evaluation can have disastrous effects.
While literature in this area may implicitly or explicitly evaluate faithfulness for specific explanation techniques, there is no consistent and formal definition of faithfulness. We uncover three assumptions that underlie all these attempts. By making the assumptions explicit and organizing the literature around them, we “connect the dots” between seemingly distinct evaluation methods, and also provide a basis for discussion regarding the desirable properties of faithfulness (§SECREF6).
Finally, we observe a trend by which faithfulness is treated as a binary property, followed by showing that an interpretation method is not faithful. We claim that this is unproductive (§SECREF7), as the assumptions are nearly impossible to satisfy fully, and it is all too easy to disprove the faithfulness of an interpretation method via a counter-example. What can be done? We argue for a more practical view of faithfulness, calling for graded criteria that measure the extent and likelihood of an interpretation being faithful in practice (§SECREF8). While we have started to work in this area, we pose the exact formalization of these criteria, and concrete evaluation methods for them, as a central challenge to the community for the near future.
Faithfulness vs. Plausibility
There is considerable research effort in attempting to define and categorize the desiderata of a learned system's interpretation, most of which revolves around specific use-cases BIBREF17, BIBREF15.
Two particularly notable criteria, each useful for different purposes, are plausibility and faithfulness. “Plausibility” refers to how convincing the interpretation is to humans, while “faithfulness” refers to how accurately it reflects the true reasoning process of the model BIBREF2, BIBREF18.
Naturally, it is possible to satisfy one of these properties without the other. For example, consider the case of interpretation via post-hoc text generation—where an additional “generator” component outputs a textual explanation of the model's decision, and the generator is learned with supervision of textual explanations BIBREF19, BIBREF20, BIBREF21. In this case, plausibility is the dominating property, while there is no faithfulness guarantee.
Despite the difference between the two criteria, many authors do not clearly make the distinction, and sometimes conflate the two. Moreover, the majority of works do not explicitly name the criteria under consideration, even when they clearly belong to one camp or the other.
We argue that this conflation is dangerous. For example, consider the case of recidivism prediction, where a judge is exposed to a model's prediction and its interpretation, and the judge believes the interpretation to reflect the model's reasoning process. Since the interpretation's faithfulness carries legal consequences, a plausible but unfaithful interpretation may be the worst-case scenario. The lack of explicit claims by research may cause misinformation to potential users of the technology, who are not versed in its inner workings. Therefore, clear distinction between these terms is critical.
Inherently Interpretable?
A distinction is often made between two methods of achieving interpretability: (1) interpreting existing models via post-hoc techniques; and (2) designing inherently interpretable models. BIBREF29 argues in favor of inherently interpretable models, which by design claim to provide more faithful interpretations than post-hoc interpretation of black-box models.
We warn against taking this argumentation at face-value: a method being “inherently interpretable” is merely a claim that needs to be verified before it can be trusted. Indeed, while attention mechanisms have been considered “inherently interpretable” BIBREF30, BIBREF31, recent work casts doubt on their faithfulness BIBREF32, BIBREF33, BIBREF18.
Evaluation via Utility
While explanations have many different use-cases, such as model debugging, lawful guarantees or health-critical guarantees, one other possible use-case with particularly prominent evaluation literature is Intelligent User Interfaces (IUI), via Human-Computer Interaction (HCI), of automatic models assisting human decision-makers. In this case, the goal of the explanation is to increase the degree of trust between the user and the system, giving the user a more nuanced sense of whether the system's decision is likely correct. In the general case, the final evaluation metric is the performance of the user at their task BIBREF34. For example, BIBREF35 evaluate various explanations of a model in a setting of trivia question answering.
However, in the context of faithfulness, we must warn against HCI-inspired evaluation, as well: increased performance in this setting is not indicative of faithfulness; rather, it is indicative of correlation between the plausibility of the explanations and the model's performance.
To illustrate, consider the following fictional case of a non-faithful explanation system, in an HCI evaluation setting: the explanation given is a heat-map of the textual input, attributing scores to various tokens. Assume the system explanations behave in the following way: when the output is correct, the explanation consists of random content words; and when the output is incorrect, it consists of random punctuation marks. In other words, the explanation is more likely to appear plausible when the model is correct, while at the same time not reflecting the true decision process of the model. The user, convinced by the nicer-looking explanations, performs better using this system. However, the explanation consistently claimed random tokens to be highly relevant to the model's reasoning process. While the system is concretely useful, the claims given by the explanation do not reflect the model's decisions whatsoever (by design).
While the above scenario is extreme, this misunderstanding is not entirely unlikely, since any degree of correlation between plausibility and model performance will result in increased user performance, regardless of any notion of faithfulness.
Guidelines for Evaluating Faithfulness
We propose the following guidelines for evaluating the faithfulness of explanations. These guidelines address common pitfalls and sub-optimal practices we observed in the literature.
Guidelines for Evaluating Faithfulness ::: Be explicit in what you evaluate.
Conflating plausibility and faithfulness is harmful. You should be explicit about which of them you evaluate, and use suitable methodologies for each one. Of course, the same applies when designing interpretation techniques: be clear about which properties are being prioritized.
Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation should not involve human-judgement on the quality of interpretation.
We note that: (1) humans cannot judge if an interpretation is faithful or not: if they understood the model, the interpretation would be unnecessary; (2) for similar reasons, we cannot obtain supervision for this problem, either. Therefore, human judgement should not be involved in the evaluation of faithfulness, as human judgement measures plausibility.
Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation should not involve human-provided gold labels.
We should be able to interpret incorrect model predictions, just the same as correct ones. Evaluation methods that rely on gold labels are influenced by human priors on what the model should do, and again push the evaluation in the direction of plausibility.
Guidelines for Evaluating Faithfulness ::: Do not trust “inherent interpretability” claims.
Inherent interpretability is a claim until proven otherwise. Explanations provided by “inherently interpretable” models must be held to the same standards as post-hoc interpretation methods, and be evaluated for faithfulness using the same set of evaluation techniques.
Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation of IUI systems should not rely on user performance.
End-task user performance in HCI settings is merely indicative of correlation between plausibility and model performance, however small this correlation is. While important to evaluate the utility of the interpretations for some use-cases, it is unrelated to faithfulness.
Defining Faithfulness
What does it mean for an interpretation method to be faithful? Intuitively, we would like the provided interpretation to reflect the true reasoning process of the model when making a decision. But what is a reasoning process of a model, and how can reasoning processes be compared to each other?
Lacking a standard definition, different works evaluate their methods by introducing tests to measure properties that they believe good interpretations should satisfy. Some of these tests measure aspects of faithfulness. These ad-hoc definitions are often unique to each paper and inconsistent with each other, making it hard to find commonalities.
We uncover three assumptions that underlie all these methods, enabling us to organize the literature along standardized axes, and relate seemingly distinct lines of work. Moreover, exposing the underlying assumptions enables an informed discussion regarding their validity and merit (we leave such a discussion for future work, by us or others).
These assumptions, to our knowledge, encapsulate the current working definitions of faithfulness used by the research community.
Defining Faithfulness ::: Assumption 1 (The Model Assumption).
Two models will make the same predictions if and only if they use the same reasoning process.
Corollary 1.1. An interpretation system is unfaithful if it results in different interpretations of models that make the same decisions.
As demonstrated by a recent example concerning NLP models, this corollary can be used for proof by counter-example. Theoretically, if all possible models which can perfectly mimic the model's decisions also provide the same interpretations, then they could be deemed faithful. Conversely, showing that two models provide the same results but different interpretations disproves the faithfulness of the method. BIBREF18 show how these counter-examples can be derived with adversarial training of models which can mimic the original model, yet provide different explanations.
Corollary 1.2. An interpretation is unfaithful if it results in different decisions than the model it interprets.
A more direct application of the Model Assumption is via the notion of fidelity BIBREF15, BIBREF8. For cases in which the explanation is itself a model capable of making decisions (e.g., decision trees or rule lists BIBREF36), fidelity is defined as the degree to which the explanation model can mimic the original model's decisions (as an accuracy score). For cases where the explanation is not a computable model, BIBREF37 propose a simple way of mapping explanations to decisions via crowd-sourcing, by asking humans to simulate the model's decision without any access to the model, and only access to the input and explanation (termed forward simulation). This idea is further explored and used in practice by BIBREF38.
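To make the fidelity notion above concrete, here is a minimal Python sketch (our illustration, not code from the cited works); the two predict callables, the input collection, and the toy usage are hypothetical placeholders for any original model and surrogate explanation model:
import numpy as np

def fidelity(original_predict, surrogate_predict, inputs):
    # Fidelity as an accuracy score: the fraction of inputs on which the
    # surrogate (explanation) model reproduces the original model's decision.
    original = np.array([original_predict(x) for x in inputs])
    surrogate = np.array([surrogate_predict(x) for x in inputs])
    return float(np.mean(original == surrogate))

# Hypothetical usage with two toy classifiers over 5-dimensional inputs:
# black_box = lambda x: int(x.sum() > 0)
# rule_list = lambda x: int(x[0] > 0)
# print(fidelity(black_box, rule_list, [np.random.randn(5) for _ in range(1000)]))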
Defining Faithfulness ::: Assumption 2 (The Prediction Assumption).
On similar inputs, the model makes similar decisions if and only if its reasoning is similar.
Corollary 2. An interpretation system is unfaithful if it provides different interpretations for similar inputs and outputs.
Since the interpretation serves as a proxy for the model's “reasoning”, it should satisfy the same constraints. In other words, interpretations of similar decisions should be similar, and interpretations of dissimilar decisions should be dissimilar.
This assumption is more useful for disproving the faithfulness of an interpretation than for proving it, since a disproof requires finding appropriate cases where the assumption doesn't hold, whereas a proof would require checking a (very large) satisfactory quantity of examples, or even the entire input space.
One recent discussion in the NLP community BIBREF33, BIBREF18 concerns the use of this underlying assumption for evaluating attention heat-maps as explanations. The former attempts to provide different explanations of similar decisions per instance. The latter critiques the former and is based more heavily on the model assumption, described above.
Additionally, BIBREF39 propose to introduce a constant shift to the input space, and evaluate whether the explanation changes significantly as the final decision stays the same. BIBREF16 formalize a generalization of this technique under the term interpretability robustness: interpretations should be invariant to small perturbations in the input (a direct consequence of the prediction assumption). BIBREF40 further expand on this notion as “consistency of the explanation with respect to the model”. Unfortunately, robustness measures are difficult to apply in NLP settings due to the discrete input.
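To illustrate the robustness idea for continuous inputs only (the discrete nature of text is exactly why, as noted, this is hard to apply in NLP), a minimal sketch might measure how much an attribution vector moves under small perturbations that leave the decision unchanged; predict and explain are assumed, hypothetical callables returning a label and an attribution vector:
import numpy as np

def explanation_sensitivity(predict, explain, x, epsilon=0.01, n_samples=50, seed=0):
    # Largest change in the explanation over small random perturbations of x
    # that keep the model's decision fixed (smaller means more robust).
    rng = np.random.default_rng(seed)
    base_label = predict(x)
    base_attribution = explain(x)
    worst = 0.0
    for _ in range(n_samples):
        x_perturbed = x + rng.uniform(-epsilon, epsilon, size=x.shape)
        if predict(x_perturbed) == base_label:  # same decision, so reasoning should match
            change = float(np.linalg.norm(explain(x_perturbed) - base_attribution))
            worst = max(worst, change)
    return worst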
Defining Faithfulness ::: Assumption 3 (The Linearity Assumption).
Certain parts of the input are more important to the model reasoning than others. Moreover, the contributions of different parts of the input are independent from each other.
Corollary 3. Under certain circumstances, heat-map interpretations can be faithful.
This assumption is employed by methods that consider heat-maps (e.g., attention maps) over the input as explanations, particularly popular in NLP. Heat-maps are claims about which parts of the input are more relevant than others to the model's decision. As such, we can design “stress tests” to verify whether they uphold their claims.
One method proposed to do so is erasure, where the “most relevant” parts of the input—according to the explanation—are erased from the input, in expectation that the model's decision will change BIBREF25, BIBREF42, BIBREF32. Otherwise, the “least relevant” parts of the input may be erased, in expectation that the model's decision will not change BIBREF43. BIBREF44, BIBREF45 propose two measures of comprehensiveness and sufficiency as a formal generalization of erasure: as the degree by which the model is influenced by the removal of the high-ranking features, or by inclusion of solely the high-ranking features.
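A minimal sketch of these erasure-based measures (our own illustration, assuming a classifier that returns class probabilities and tolerates a mask token; all names are hypothetical) could look as follows; a faithful heat-map should yield high comprehensiveness and low sufficiency for its own top-ranked tokens:
def comprehensiveness_and_sufficiency(predict_proba, tokens, ranked_indices, label, k, mask="[MASK]"):
    # Comprehensiveness: probability drop when the k highest-ranked tokens are erased.
    # Sufficiency:       probability drop when only those k tokens are kept.
    top_k = set(ranked_indices[:k])
    full_prob = predict_proba(tokens)[label]
    without_top = [mask if i in top_k else tok for i, tok in enumerate(tokens)]
    only_top = [tok if i in top_k else mask for i, tok in enumerate(tokens)]
    comprehensiveness = full_prob - predict_proba(without_top)[label]
    sufficiency = full_prob - predict_proba(only_top)[label]
    return comprehensiveness, sufficiency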
Is Faithful Interpretation Impossible?
The aforementioned assumptions are currently utilized to evaluate faithfulness in a binary manner, whether an interpretation is strictly faithful or not. Specifically, they are most often used to show that a method is not faithful, by constructing cases in which the assumptions do not hold for the suggested method. In other words, there is a clear trend of proof via counter-example, for various interpretation methods, that they are not globally faithful.
We claim that this is unproductive, as we expect these various methods to consistently result in negative (not faithful) results, continuing the current trend. This follows because an interpretation functions as an approximation of the model or decision's true reasoning process, so it by definition loses information. By the pigeonhole principle, there will be inputs with deviation between interpretation and reasoning.
This is observed in practice, in numerous works that show adversarial or pathological behaviors arising from the deeply non-linear and high-dimensional decision boundaries of current models. Furthermore, because we lack supervision regarding which models or decisions are indeed mappable to human-readable concepts, we cannot ignore the approximation errors.
This poses a high bar for explanation methods to fulfill, a bar which we estimate will not be overcome soon, if at all. What should we do, then, if we desire a system that provides faithful explanations?
Towards Better Faithfulness Criteria
We argue that a way out of this standstill is in a more practical and nuanced methodology for defining and evaluating faithfulness. We propose the following challenge to the community: we must develop a formal definition and evaluation of faithfulness that allow us the freedom to say when a method is sufficiently faithful to be useful in practice.
We note two possible approaches to this end:
Across models and tasks: The degree (as grayscale) of faithfulness at the level of specific models and tasks. Perhaps some models or tasks allow sufficiently faithful interpretation, even if the same is not true for others.
For example, the method may not be faithful for some question-answering task, but faithful for movie review sentiment, perhaps based on various syntactic and semantic attributes of those tasks.
Across input space: The degree of faithfulness at the level of subspaces of the input space, such as neighborhoods of similar inputs, or singular inputs themselves. If we are able to say with some degree of confidence whether a specific decision's explanation is faithful to the model, even if the interpretation method is not considered universally faithful, it can be used with respect to those specific areas or instances only.
Conclusion
The opinion proposed in this paper is two-fold:
First, interpretability evaluation often conflates evaluating faithfulness and plausibility together. We should tease apart the two definitions and focus solely on evaluating faithfulness without any supervision or influence of the convincing power of the interpretation.
Second, faithfulness is often evaluated in a binary “faithful or not faithful” manner, and we believe strictly faithful interpretation is a “unicorn” which will likely never be found. We should instead evaluate faithfulness on a more nuanced “grayscale” that allows interpretations to be useful even if they are not globally and definitively faithful.
Acknowledgements
We thank Yanai Elazar for welcome input on the presentation and organization of the paper. We also thank the reviewers for additional feedback and pointing to relevant literature in HCI and IUI.
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT). | Across models and tasks: The degree (as grayscale) of faithfulness at the level of specific models and tasks., Across input space: The degree of faithfulness at the level of subspaces of the input space, such as neighborhoods of similar inputs, or singular inputs themselves. |
3371d586a3a81de1552d90459709c57c0b1a2594 | 3371d586a3a81de1552d90459709c57c0b1a2594_0 | Q: What faithfulness criteria do they propose?
Text: Introduction
Fueled by recent advances in deep-learning and language processing, NLP systems are increasingly being used for prediction and decision-making in many fields BIBREF0, including sensitive ones such as health, commerce and law BIBREF1. Unfortunately, these highly flexible and highly effective neural models are also opaque. There is therefore a critical need for explaining learning-based models' decisions.
The emerging research topic of interpretability or explainability has grown rapidly in recent years. Unfortunately, not without growing pains.
One such pain is the challenge of defining—and evaluating—what constitutes a quality interpretation. Current approaches define interpretation in a rather ad-hoc manner, motivated by practical use-cases and applications. However, this view often fails to distinguish between distinct aspects of the interpretation's quality, such as readability, plausibility and faithfulness BIBREF2. We argue (§SECREF2, §SECREF5) such conflation is harmful, and that faithfulness should be defined and evaluated explicitly, and independently from plausibility.
Our main focus is the evaluation of the faithfulness of an explanation. Intuitively, a faithful interpretation is one that accurately represents the reasoning process behind the model's prediction. We find this to be a pressing issue in explainability: in cases where an explanation is required to be faithful, imperfect or misleading evaluation can have disastrous effects.
While literature in this area may implicitly or explicitly evaluate faithfulness for specific explanation techniques, there is no consistent and formal definition of faithfulness. We uncover three assumptions that underlie all these attempts. By making the assumptions explicit and organizing the literature around them, we “connect the dots” between seemingly distinct evaluation methods, and also provide a basis for discussion regarding the desirable properties of faithfulness (§SECREF6).
Finally, we observe a trend by which faithfulness is treated as a binary property, followed by showing that an interpretation method is not faithful. We claim that this is unproductive (§SECREF7), as the assumptions are nearly impossible to satisfy fully, and it is all too easy to disprove the faithfulness of an interpretation method via a counter-example. What can be done? We argue for a more practical view of faithfulness, calling for graded criteria that measure the extent and likelihood of an interpretation being faithful in practice (§SECREF8). While we have started to work in this area, we pose the exact formalization of these criteria, and concrete evaluation methods for them, as a central challenge to the community for the coming future.
Faithfulness vs. Plausibility
There is considerable research effort in attempting to define and categorize the desiderata of a learned system's interpretation, most of which revolves around specific use-cases BIBREF17, BIBREF15.
Two particularly notable criteria, each useful for a different purpose, are plausibility and faithfulness. “Plausibility” refers to how convincing the interpretation is to humans, while “faithfulness” refers to how accurately it reflects the true reasoning process of the model BIBREF2, BIBREF18.
Naturally, it is possible to satisfy one of these properties without the other. For example, consider the case of interpretation via post-hoc text generation—where an additional “generator” component outputs a textual explanation of the model's decision, and the generator is learned with supervision of textual explanations BIBREF19, BIBREF20, BIBREF21. In this case, plausibility is the dominating property, while there is no faithfulness guarantee.
Despite the difference between the two criteria, many authors do not clearly make the distinction, and sometimes conflate the two. Moreover, the majority of works do not explicitly name the criteria under consideration, even when they clearly belong to one camp or the other.
We argue that this conflation is dangerous. For example, consider the case of recidivism prediction, where a judge is exposed to a model's prediction and its interpretation, and the judge believes the interpretation to reflect the model's reasoning process. Since the interpretation's faithfulness carries legal consequences, a plausible but unfaithful interpretation may be the worst-case scenario. The lack of explicit claims by research may cause misinformation to potential users of the technology, who are not versed in its inner workings. Therefore, clear distinction between these terms is critical.
Inherently Interpretable?
A distinction is often made between two methods of achieving interpretability: (1) interpreting existing models via post-hoc techniques; and (2) designing inherently interpretable models. BIBREF29 argues in favor of inherently interpretable models, which by design claim to provide more faithful interpretations than post-hoc interpretation of black-box models.
We warn against taking this argumentation at face-value: a method being “inherently interpretable” is merely a claim that needs to be verified before it can be trusted. Indeed, while attention mechanisms have been considered as “inherently interpretable” BIBREF30, BIBREF31, recent work casts doubt on their faithfulness BIBREF32, BIBREF33, BIBREF18.
Evaluation via Utility
While explanations have many different use-cases, such as model debugging, lawful guarantees or health-critical guarantees, one other possible use-case with particularly prominent evaluation literature is Intelligent User Interfaces (IUI), via Human-Computer Interaction (HCI), of automatic models assisting human decision-makers. In this case, the goal of the explanation is to increase the degree of trust between the user and the system, giving the user more nuance towards whether the system's decision is likely correct, or not. In the general case, the final evaluation metric is the performance of the user at their task BIBREF34. For example, BIBREF35 evaluate various explanations of a model in a setting of trivia question answering.
However, in the context of faithfulness, we must warn against HCI-inspired evaluation, as well: increased performance in this setting is not indicative of faithfulness; rather, it is indicative of correlation between the plausibility of the explanations and the model's performance.
To illustrate, consider the following fictional case of a non-faithful explanation system, in an HCI evaluation setting: the explanation given is a heat-map of the textual input, attributing scores to various tokens. Assume the system explanations behave in the following way: when the output is correct, the explanation consists of random content words; and when the output is incorrect, it consists of random punctuation marks. In other words, the explanation is more likely to appear plausible when the model is correct, while at the same time not reflecting the true decision process of the model. The user, convinced by the nicer-looking explanations, performs better using this system. However, the explanation consistently claimed random tokens to be highly relevant to the model's reasoning process. While the system is concretely useful, the claims given by the explanation do not reflect the model's decisions whatsoever (by design).
While the above scenario is extreme, this misunderstanding is not entirely unlikely, since any degree of correlation between plausibility and model performance will result in increased user performance, regardless of any notion of faithfulness.
Guidelines for Evaluating Faithfulness
We propose the following guidelines for evaluating the faithfulness of explanations. These guidelines address common pitfalls and sub-optimal practices we observed in the literature.
Guidelines for Evaluating Faithfulness ::: Be explicit in what you evaluate.
Conflating plausibility and faithfulness is harmful. You should be explicit about which one of them you evaluate, and use suitable methodologies for each one. Of course, the same applies when designing interpretation techniques—be clear about which properties are being prioritized.
Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation should not involve human-judgement on the quality of interpretation.
We note that: (1) humans cannot judge if an interpretation is faithful or not: if they understood the model, interpretation would be unnecessary; (2) for similar reasons, we cannot obtain supervision for this problem, either. Therefore, human judgement should not be involved in evaluation for faithfulness, as human judgement measures plausibility.
Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation should not involve human-provided gold labels.
We should be able to interpret incorrect model predictions, just the same as correct ones. Evaluation methods that rely on gold labels are influenced by human priors on what the model should do, and again push the evaluation in the direction of plausibility.
Guidelines for Evaluating Faithfulness ::: Do not trust “inherent interpretability” claims.
Inherent interpretability is a claim until proven otherwise. Explanations provided by “inherently interpretable” models must be held to the same standards as post-hoc interpretation methods, and be evaluated for faithfulness using the same set of evaluation techniques.
Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation of IUI systems should not rely on user performance.
End-task user performance in HCI settings is merely indicative of correlation between plausibility and model performance, however small this correlation is. While important to evaluate the utility of the interpretations for some use-cases, it is unrelated to faithfulness.
Defining Faithfulness
What does it mean for an interpretation method to be faithful? Intuitively, we would like the provided interpretation to reflect the true reasoning process of the model when making a decision. But what is a reasoning process of a model, and how can reasoning processes be compared to each other?
Lacking a standard definition, different works evaluate their methods by introducing tests to measure properties that they believe good interpretations should satisfy. Some of these tests measure aspects of faithfulness. These ad-hoc definitions are often unique to each paper and inconsistent with each other, making it hard to find commonalities.
We uncover three assumptions that underlie all these methods, enabling us to organize the literature along standardized axes, and relate seemingly distinct lines of work. Moreover, exposing the underlying assumptions enables an informed discussion regarding their validity and merit (we leave such a discussion for future work, by us or others).
These assumptions, to our knowledge, encapsulate the current working definitions of faithfulness used by the research community.
Defining Faithfulness ::: Assumption 1 (The Model Assumption).
Two models will make the same predictions if and only if they use the same reasoning process.
Corollary 1.1. An interpretation system is unfaithful if it results in different interpretations of models that make the same decisions.
As demonstrated by a recent example concerning NLP models, this corollary can be used for proof by counter-example. Theoretically, if all possible models which can perfectly mimic the model's decisions also provide the same interpretations, then they could be deemed faithful. Conversely, showing that two models provide the same results but different interpretations disproves the faithfulness of the method. BIBREF18 show how these counter-examples can be derived with adversarial training of models which can mimic the original model, yet provide different explanations.
Corollary 1.2. An interpretation is unfaithful if it results in different decisions than the model it interprets.
A more direct application of the Model Assumption is via the notion of fidelity BIBREF15, BIBREF8. For cases in which the explanation is itself a model capable of making decisions (e.g., decision trees or rule lists BIBREF36), fidelity is defined as the degree to which the explanation model can mimic the original model's decisions (as an accuracy score). For cases where the explanation is not a computable model, BIBREF37 propose a simple way of mapping explanations to decisions via crowd-sourcing, by asking humans to simulate the model's decision without any access to the model, and only access to the input and explanation (termed forward simulation). This idea is further explored and used in practice by BIBREF38.
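To make the fidelity notion above concrete, here is a minimal Python sketch (our illustration, not code from the cited works); the two predict callables, the input collection, and the toy usage are hypothetical placeholders for any original model and surrogate explanation model:
import numpy as np

def fidelity(original_predict, surrogate_predict, inputs):
    # Fidelity as an accuracy score: the fraction of inputs on which the
    # surrogate (explanation) model reproduces the original model's decision.
    original = np.array([original_predict(x) for x in inputs])
    surrogate = np.array([surrogate_predict(x) for x in inputs])
    return float(np.mean(original == surrogate))

# Hypothetical usage with two toy classifiers over 5-dimensional inputs:
# black_box = lambda x: int(x.sum() > 0)
# rule_list = lambda x: int(x[0] > 0)
# print(fidelity(black_box, rule_list, [np.random.randn(5) for _ in range(1000)]))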
Defining Faithfulness ::: Assumption 2 (The Prediction Assumption).
On similar inputs, the model makes similar decisions if and only if its reasoning is similar.
Corollary 2. An interpretation system is unfaithful if it provides different interpretations for similar inputs and outputs.
Since the interpretation serves as a proxy for the model's “reasoning”, it should satisfy the same constraints. In other words, interpretations of similar decisions should be similar, and interpretations of dissimilar decisions should be dissimilar.
This assumption is more useful for disproving the faithfulness of an interpretation than for proving it, since a disproof requires finding appropriate cases where the assumption doesn't hold, whereas a proof would require checking a (very large) satisfactory quantity of examples, or even the entire input space.
One recent discussion in the NLP community BIBREF33, BIBREF18 concerns the use of this underlying assumption for evaluating attention heat-maps as explanations. The former attempts to provide different explanations of similar decisions per instance. The latter critiques the former and is based more heavily on the model assumption, described above.
Additionally, BIBREF39 propose to introduce a constant shift to the input space, and evaluate whether the explanation changes significantly as the final decision stays the same. BIBREF16 formalize a generalization of this technique under the term interpretability robustness: interpretations should be invariant to small perturbations in the input (a direct consequence of the prediction assumption). BIBREF40 further expand on this notion as “consistency of the explanation with respect to the model”. Unfortunately, robustness measures are difficult to apply in NLP settings due to the discrete input.
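To illustrate the robustness idea for continuous inputs only (the discrete nature of text is exactly why, as noted, this is hard to apply in NLP), a minimal sketch might measure how much an attribution vector moves under small perturbations that leave the decision unchanged; predict and explain are assumed, hypothetical callables returning a label and an attribution vector:
import numpy as np

def explanation_sensitivity(predict, explain, x, epsilon=0.01, n_samples=50, seed=0):
    # Largest change in the explanation over small random perturbations of x
    # that keep the model's decision fixed (smaller means more robust).
    rng = np.random.default_rng(seed)
    base_label = predict(x)
    base_attribution = explain(x)
    worst = 0.0
    for _ in range(n_samples):
        x_perturbed = x + rng.uniform(-epsilon, epsilon, size=x.shape)
        if predict(x_perturbed) == base_label:  # same decision, so reasoning should match
            change = float(np.linalg.norm(explain(x_perturbed) - base_attribution))
            worst = max(worst, change)
    return worst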
Defining Faithfulness ::: Assumption 3 (The Linearity Assumption).
Certain parts of the input are more important to the model reasoning than others. Moreover, the contributions of different parts of the input are independent from each other.
Corollary 3. Under certain circumstances, heat-map interpretations can be faithful.
This assumption is employed by methods that consider heat-maps (e.g., attention maps) over the input as explanations, particularly popular in NLP. Heat-maps are claims about which parts of the input are more relevant than others to the model's decision. As such, we can design “stress tests” to verify whether they uphold their claims.
One method proposed to do so is erasure, where the “most relevant” parts of the input—according to the explanation—are erased from the input, in expectation that the model's decision will change BIBREF25, BIBREF42, BIBREF32. Otherwise, the “least relevant” parts of the input may be erased, in expectation that the model's decision will not change BIBREF43. BIBREF44, BIBREF45 propose two measures of comprehensiveness and sufficiency as a formal generalization of erasure: as the degree by which the model is influenced by the removal of the high-ranking features, or by inclusion of solely the high-ranking features.
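A minimal sketch of these erasure-based measures (our own illustration, assuming a classifier that returns class probabilities and tolerates a mask token; all names are hypothetical) could look as follows; a faithful heat-map should yield high comprehensiveness and low sufficiency for its own top-ranked tokens:
def comprehensiveness_and_sufficiency(predict_proba, tokens, ranked_indices, label, k, mask="[MASK]"):
    # Comprehensiveness: probability drop when the k highest-ranked tokens are erased.
    # Sufficiency:       probability drop when only those k tokens are kept.
    top_k = set(ranked_indices[:k])
    full_prob = predict_proba(tokens)[label]
    without_top = [mask if i in top_k else tok for i, tok in enumerate(tokens)]
    only_top = [tok if i in top_k else mask for i, tok in enumerate(tokens)]
    comprehensiveness = full_prob - predict_proba(without_top)[label]
    sufficiency = full_prob - predict_proba(only_top)[label]
    return comprehensiveness, sufficiency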
Is Faithful Interpretation Impossible?
The aforementioned assumptions are currently utilized to evaluate faithfulness in a binary manner, whether an interpretation is strictly faithful or not. Specifically, they are most often used to show that a method is not faithful, by constructing cases in which the assumptions do not hold for the suggested method. In other words, there is a clear trend of proof via counter-example, for various interpretation methods, that they are not globally faithful.
We claim that this is unproductive, as we expect these various methods to consistently result in negative (not faithful) results, continuing the current trend. This follows because an interpretation functions as an approximation of the model or decision's true reasoning process, so it by definition loses information. By the pigeonhole principle, there will be inputs with deviation between interpretation and reasoning.
This is observed in practice, in numerous works that show adversarial or pathological behaviors arising from the deeply non-linear and high-dimensional decision boundaries of current models. Furthermore, because we lack supervision regarding which models or decisions are indeed mappable to human-readable concepts, we cannot ignore the approximation errors.
This poses a high bar for explanation methods to fulfill, a bar which we estimate will not be overcome soon, if at all. What should we do, then, if we desire a system that provides faithful explanations?
Towards Better Faithfulness Criteria
We argue that a way out of this standstill is in a more practical and nuanced methodology for defining and evaluating faithfulness. We propose the following challenge to the community: we must develop a formal definition and evaluation of faithfulness that allow us the freedom to say when a method is sufficiently faithful to be useful in practice.
We note two possible approaches to this end:
Across models and tasks: The degree (as grayscale) of faithfulness at the level of specific models and tasks. Perhaps some models or tasks allow sufficiently faithful interpretation, even if the same is not true for others.
For example, the method may not be faithful for some question-answering task, but faithful for movie review sentiment, perhaps based on various syntactic and semantic attributes of those tasks.
Across input space: The degree of faithfulness at the level of subspaces of the input space, such as neighborhoods of similar inputs, or singular inputs themselves. If we are able to say with some degree of confidence whether a specific decision's explanation is faithful to the model, even if the interpretation method is not considered universally faithful, it can be used with respect to those specific areas or instances only.
Conclusion
The opinion proposed in this paper is two-fold:
First, interpretability evaluation often conflates evaluating faithfulness and plausibility together. We should tease apart the two definitions and focus solely on evaluating faithfulness without any supervision or influence of the convincing power of the interpretation.
Second, faithfulness is often evaluated in a binary “faithful or not faithful” manner, and we believe strictly faithful interpretation is a “unicorn” which will likely never be found. We should instead evaluate faithfulness on a more nuanced “grayscale” that allows interpretations to be useful even if they are not globally and definitively faithful.
Acknowledgements
We thank Yanai Elazar for welcome input on the presentation and organization of the paper. We also thank the reviewers for additional feedback and pointing to relevant literature in HCI and IUI.
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT). | Across models and tasks: The degree (as grayscale) of faithfulness at the level of specific models and tasks., Across input space: The degree of faithfulness at the level of subspaces of the input space, such as neighborhoods of similar inputs, or singular inputs themselves. |
d4b9cdb4b2dfda1e0d96ab6c3b5e2157fd52685e | d4b9cdb4b2dfda1e0d96ab6c3b5e2157fd52685e_0 | Q: Which are three assumptions in current approaches for defining faithfulness?
Text: Introduction
Fueled by recent advances in deep-learning and language processing, NLP systems are increasingly being used for prediction and decision-making in many fields BIBREF0, including sensitive ones such as health, commerce and law BIBREF1. Unfortunately, these highly flexible and highly effective neural models are also opaque. There is therefore a critical need for explaining learning-based models' decisions.
The emerging research topic of interpretability or explainability has grown rapidly in recent years. Unfortunately, not without growing pains.
One such pain is the challenge of defining—and evaluating—what constitutes a quality interpretation. Current approaches define interpretation in a rather ad-hoc manner, motivated by practical use-cases and applications. However, this view often fails to distinguish between distinct aspects of the interpretation's quality, such as readability, plausibility and faithfulness BIBREF2. We argue (§SECREF2, §SECREF5) such conflation is harmful, and that faithfulness should be defined and evaluated explicitly, and independently from plausibility.
Our main focus is the evaluation of the faithfulness of an explanation. Intuitively, a faithful interpretation is one that accurately represents the reasoning process behind the model's prediction. We find this to be a pressing issue in explainability: in cases where an explanation is required to be faithful, imperfect or misleading evaluation can have disastrous effects.
While literature in this area may implicitly or explicitly evaluate faithfulness for specific explanation techniques, there is no consistent and formal definition of faithfulness. We uncover three assumptions that underlie all these attempts. By making the assumptions explicit and organizing the literature around them, we “connect the dots” between seemingly distinct evaluation methods, and also provide a basis for discussion regarding the desirable properties of faithfulness (§SECREF6).
Finally, we observe a trend by which faithfulness is treated as a binary property, followed by showing that an interpretation method is not faithful. We claim that this is unproductive (§SECREF7), as the assumptions are nearly impossible to satisfy fully, and it is all too easy to disprove the faithfulness of an interpretation method via a counter-example. What can be done? We argue for a more practical view of faithfulness, calling for graded criteria that measure the extent and likelihood of an interpretation being faithful in practice (§SECREF8). While we have started to work in this area, we pose the exact formalization of these criteria, and concrete evaluation methods for them, as a central challenge to the community for the coming future.
Faithfulness vs. Plausibility
There is considerable research effort in attempting to define and categorize the desiderata of a learned system's interpretation, most of which revolves around specific use-cases BIBREF17, BIBREF15.
Two particularly notable criteria, each useful for a different purpose, are plausibility and faithfulness. “Plausibility” refers to how convincing the interpretation is to humans, while “faithfulness” refers to how accurately it reflects the true reasoning process of the model BIBREF2, BIBREF18.
Naturally, it is possible to satisfy one of these properties without the other. For example, consider the case of interpretation via post-hoc text generation—where an additional “generator” component outputs a textual explanation of the model's decision, and the generator is learned with supervision of textual explanations BIBREF19, BIBREF20, BIBREF21. In this case, plausibility is the dominating property, while there is no faithfulness guarantee.
Despite the difference between the two criteria, many authors do not clearly make the distinction, and sometimes conflate the two. Moreover, the majority of works do not explicitly name the criteria under consideration, even when they clearly belong to one camp or the other.
We argue that this conflation is dangerous. For example, consider the case of recidivism prediction, where a judge is exposed to a model's prediction and its interpretation, and the judge believes the interpretation to reflect the model's reasoning process. Since the interpretation's faithfulness carries legal consequences, a plausible but unfaithful interpretation may be the worst-case scenario. The lack of explicit claims by research may cause misinformation to potential users of the technology, who are not versed in its inner workings. Therefore, clear distinction between these terms is critical.
Inherently Interpretable?
A distinction is often made between two methods of achieving interpretability: (1) interpreting existing models via post-hoc techniques; and (2) designing inherently interpretable models. BIBREF29 argues in favor of inherently interpretable models, which by design claim to provide more faithful interpretations than post-hoc interpretation of black-box models.
We warn against taking this argumentation at face-value: a method being “inherently interpretable” is merely a claim that needs to be verified before it can be trusted. Indeed, while attention mechanisms have been considered as “inherently interpretable” BIBREF30, BIBREF31, recent work casts doubt on their faithfulness BIBREF32, BIBREF33, BIBREF18.
Evaluation via Utility
While explanations have many different use-cases, such as model debugging, lawful guarantees or health-critical guarantees, one other possible use-case with particularly prominent evaluation literature is Intelligent User Interfaces (IUI), via Human-Computer Interaction (HCI), of automatic models assisting human decision-makers. In this case, the goal of the explanation is to increase the degree of trust between the user and the system, giving the user more nuance towards whether the system's decision is likely correct, or not. In the general case, the final evaluation metric is the performance of the user at their task BIBREF34. For example, BIBREF35 evaluate various explanations of a model in a setting of trivia question answering.
However, in the context of faithfulness, we must warn against HCI-inspired evaluation, as well: increased performance in this setting is not indicative of faithfulness; rather, it is indicative of correlation between the plausibility of the explanations and the model's performance.
To illustrate, consider the following fictional case of a non-faithful explanation system, in an HCI evaluation setting: the explanation given is a heat-map of the textual input, attributing scores to various tokens. Assume the system explanations behave in the following way: when the output is correct, the explanation consists of random content words; and when the output is incorrect, it consists of random punctuation marks. In other words, the explanation is more likely to appear plausible when the model is correct, while at the same time not reflecting the true decision process of the model. The user, convinced by the nicer-looking explanations, performs better using this system. However, the explanation consistently claimed random tokens to be highly relevant to the model's reasoning process. While the system is concretely useful, the claims given by the explanation do not reflect the model's decisions whatsoever (by design).
While the above scenario is extreme, this misunderstanding is not entirely unlikely, since any degree of correlation between plausibility and model performance will result in increased user performance, regardless of any notion of faithfulness.
Guidelines for Evaluating Faithfulness
We propose the following guidelines for evaluating the faithfulness of explanations. These guidelines address common pitfalls and sub-optimal practices we observed in the literature.
Guidelines for Evaluating Faithfulness ::: Be explicit in what you evaluate.
Conflating plausibility and faithfulness is harmful. You should be explicit about which one of them you evaluate, and use suitable methodologies for each one. Of course, the same applies when designing interpretation techniques—be clear about which properties are being prioritized.
Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation should not involve human-judgement on the quality of interpretation.
We note that: (1) humans cannot judge if an interpretation is faithful or not: if they understood the model, interpretation would be unnecessary; (2) for similar reasons, we cannot obtain supervision for this problem, either. Therefore, human judgement should not be involved in evaluation for faithfulness, as human judgement measures plausibility.
Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation should not involve human-provided gold labels.
We should be able to interpret incorrect model predictions, just the same as correct ones. Evaluation methods that rely on gold labels are influenced by human priors on what the model should do, and again push the evaluation in the direction of plausibility.
Guidelines for Evaluating Faithfulness ::: Do not trust “inherent interpretability” claims.
Inherent interpretability is a claim until proven otherwise. Explanations provided by “inherently interpretable” models must be held to the same standards as post-hoc interpretation methods, and be evaluated for faithfulness using the same set of evaluation techniques.
Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation of IUI systems should not rely on user performance.
End-task user performance in HCI settings is merely indicative of correlation between plausibility and model performance, however small this correlation is. While important to evaluate the utility of the interpretations for some use-cases, it is unrelated to faithfulness.
Defining Faithfulness
What does it mean for an interpretation method to be faithful? Intuitively, we would like the provided interpretation to reflect the true reasoning process of the model when making a decision. But what is a reasoning process of a model, and how can reasoning processes be compared to each other?
Lacking a standard definition, different works evaluate their methods by introducing tests to measure properties that they believe good interpretations should satisfy. Some of these tests measure aspects of faithfulness. These ad-hoc definitions are often unique to each paper and inconsistent with each other, making it hard to find commonalities.
We uncover three assumptions that underlie all these methods, enabling us to organize the literature along standardized axes, and relate seemingly distinct lines of work. Moreover, exposing the underlying assumptions enables an informed discussion regarding their validity and merit (we leave such a discussion for future work, by us or others).
These assumptions, to our knowledge, encapsulate the current working definitions of faithfulness used by the research community.
Defining Faithfulness ::: Assumption 1 (The Model Assumption).
Two models will make the same predictions if and only if they use the same reasoning process.
Corollary 1.1. An interpretation system is unfaithful if it results in different interpretations of models that make the same decisions.
As demonstrated by a recent example concerning NLP models, this corollary can be used for proof by counter-example. Theoretically, if all possible models which can perfectly mimic the model's decisions also provide the same interpretations, then they could be deemed faithful. Conversely, showing that two models provide the same results but different interpretations disproves the faithfulness of the method. BIBREF18 show how these counter-examples can be derived with adversarial training of models which can mimic the original model, yet provide different explanations.
Corollary 1.2. An interpretation is unfaithful if it results in different decisions than the model it interprets.
A more direct application of the Model Assumption is via the notion of fidelity BIBREF15, BIBREF8. For cases in which the explanation is itself a model capable of making decisions (e.g., decision trees or rule lists BIBREF36), fidelity is defined as the degree to which the explanation model can mimic the original model's decisions (as an accuracy score). For cases where the explanation is not a computable model, BIBREF37 propose a simple way of mapping explanations to decisions via crowd-sourcing, by asking humans to simulate the model's decision without any access to the model, and only access to the input and explanation (termed forward simulation). This idea is further explored and used in practice by BIBREF38.
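To make the fidelity notion above concrete, here is a minimal Python sketch (our illustration, not code from the cited works); the two predict callables, the input collection, and the toy usage are hypothetical placeholders for any original model and surrogate explanation model:
import numpy as np

def fidelity(original_predict, surrogate_predict, inputs):
    # Fidelity as an accuracy score: the fraction of inputs on which the
    # surrogate (explanation) model reproduces the original model's decision.
    original = np.array([original_predict(x) for x in inputs])
    surrogate = np.array([surrogate_predict(x) for x in inputs])
    return float(np.mean(original == surrogate))

# Hypothetical usage with two toy classifiers over 5-dimensional inputs:
# black_box = lambda x: int(x.sum() > 0)
# rule_list = lambda x: int(x[0] > 0)
# print(fidelity(black_box, rule_list, [np.random.randn(5) for _ in range(1000)]))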
Defining Faithfulness ::: Assumption 2 (The Prediction Assumption).
On similar inputs, the model makes similar decisions if and only if its reasoning is similar.
Corollary 2. An interpretation system is unfaithful if it provides different interpretations for similar inputs and outputs.
Since the interpretation serves as a proxy for the model's “reasoning”, it should satisfy the same constraints. In other words, interpretations of similar decisions should be similar, and interpretations of dissimilar decisions should be dissimilar.
This assumption is more useful for disproving the faithfulness of an interpretation than for proving it, since a disproof requires finding appropriate cases where the assumption doesn't hold, whereas a proof would require checking a (very large) satisfactory quantity of examples, or even the entire input space.
One recent discussion in the NLP community BIBREF33, BIBREF18 concerns the use of this underlying assumption for evaluating attention heat-maps as explanations. The former attempts to provide different explanations of similar decisions per instance. The latter critiques the former and is based more heavily on the model assumption, described above.
Additionally, BIBREF39 propose to introduce a constant shift to the input space, and evaluate whether the explanation changes significantly as the final decision stays the same. BIBREF16 formalize a generalization of this technique under the term interpretability robustness: interpretations should be invariant to small perturbations in the input (a direct consequence of the prediction assumption). BIBREF40 further expand on this notion as “consistency of the explanation with respect to the model”. Unfortunately, robustness measures are difficult to apply in NLP settings due to the discrete input.
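To illustrate the robustness idea for continuous inputs only (the discrete nature of text is exactly why, as noted, this is hard to apply in NLP), a minimal sketch might measure how much an attribution vector moves under small perturbations that leave the decision unchanged; predict and explain are assumed, hypothetical callables returning a label and an attribution vector:
import numpy as np

def explanation_sensitivity(predict, explain, x, epsilon=0.01, n_samples=50, seed=0):
    # Largest change in the explanation over small random perturbations of x
    # that keep the model's decision fixed (smaller means more robust).
    rng = np.random.default_rng(seed)
    base_label = predict(x)
    base_attribution = explain(x)
    worst = 0.0
    for _ in range(n_samples):
        x_perturbed = x + rng.uniform(-epsilon, epsilon, size=x.shape)
        if predict(x_perturbed) == base_label:  # same decision, so reasoning should match
            change = float(np.linalg.norm(explain(x_perturbed) - base_attribution))
            worst = max(worst, change)
    return worst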
Defining Faithfulness ::: Assumption 3 (The Linearity Assumption).
Certain parts of the input are more important to the model reasoning than others. Moreover, the contributions of different parts of the input are independent from each other.
Corollary 3. Under certain circumstances, heat-map interpretations can be faithful.
This assumption is employed by methods that consider heat-maps (e.g., attention maps) over the input as explanations, particularly popular in NLP. Heat-maps are claims about which parts of the input are more relevant than others to the model's decision. As such, we can design “stress tests” to verify whether they uphold their claims.
One method proposed to do so is erasure, where the “most relevant” parts of the input—according to the explanation—are erased from the input, in expectation that the model's decision will change BIBREF25, BIBREF42, BIBREF32. Otherwise, the “least relevant” parts of the input may be erased, in expectation that the model's decision will not change BIBREF43. BIBREF44, BIBREF45 propose two measures of comprehensiveness and sufficiency as a formal generalization of erasure: as the degree by which the model is influenced by the removal of the high-ranking features, or by inclusion of solely the high-ranking features.
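A minimal sketch of these erasure-based measures (our own illustration, assuming a classifier that returns class probabilities and tolerates a mask token; all names are hypothetical) could look as follows; a faithful heat-map should yield high comprehensiveness and low sufficiency for its own top-ranked tokens:
def comprehensiveness_and_sufficiency(predict_proba, tokens, ranked_indices, label, k, mask="[MASK]"):
    # Comprehensiveness: probability drop when the k highest-ranked tokens are erased.
    # Sufficiency:       probability drop when only those k tokens are kept.
    top_k = set(ranked_indices[:k])
    full_prob = predict_proba(tokens)[label]
    without_top = [mask if i in top_k else tok for i, tok in enumerate(tokens)]
    only_top = [tok if i in top_k else mask for i, tok in enumerate(tokens)]
    comprehensiveness = full_prob - predict_proba(without_top)[label]
    sufficiency = full_prob - predict_proba(only_top)[label]
    return comprehensiveness, sufficiency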
Is Faithful Interpretation Impossible?
The aforementioned assumptions are currently utilized to evaluate faithfulness in a binary manner, whether an interpretation is strictly faithful or not. Specifically, they are most often used to show that a method is not faithful, by constructing cases in which the assumptions do not hold for the suggested method. In other words, there is a clear trend of proof via counter-example, for various interpretation methods, that they are not globally faithful.
We claim that this is unproductive, as we expect these various methods to consistently result in negative (not faithful) results, continuing the current trend. This follows because an interpretation functions as an approximation of the model or decision's true reasoning process, so it by definition loses information. By the pigeonhole principle, there will be inputs with deviation between interpretation and reasoning.
This is observed in practice, in numerous works that show adversarial or pathological behaviors arising from the deeply non-linear and high-dimensional decision boundaries of current models. Furthermore, because we lack supervision regarding which models or decisions are indeed mappable to human-readable concepts, we cannot ignore the approximation errors.
This poses a high bar for explanation methods to fulfill, a bar which we estimate will not be overcome soon, if at all. What should we do, then, if we desire a system that provides faithful explanations?
Towards Better Faithfulness Criteria
We argue that a way out of this standstill is in a more practical and nuanced methodology for defining and evaluating faithfulness. We propose the following challenge to the community: we must develop a formal definition and evaluation of faithfulness that allow us the freedom to say when a method is sufficiently faithful to be useful in practice.
We note two possible approaches to this end:
Across models and tasks: The degree (as grayscale) of faithfulness at the level of specific models and tasks. Perhaps some models or tasks allow sufficiently faithful interpretation, even if the same is not true for others.
For example, the method may not be faithful for some question-answering task, but faithful for movie review sentiment, perhaps based on various syntactic and semantic attributes of those tasks.
Across input space: The degree of faithfulness at the level of subspaces of the input space, such as neighborhoods of similar inputs, or singular inputs themselves. If we are able to say with some degree of confidence whether a specific decision's explanation is faithful to the model, even if the interpretation method is not considered universally faithful, it can be used with respect to those specific areas or instances only.
Conclusion
The opinion proposed in this paper is two-fold:
First, interpretability evaluation often conflates evaluating faithfulness and plausibility together. We should tease apart the two definitions and focus solely on evaluating faithfulness without any supervision or influence of the convincing power of the interpretation.
Second, faithfulness is often evaluated in a binary “faithful or not faithful” manner, and we believe strictly faithful interpretation is a “unicorn” which will likely never be found. We should instead evaluate faithfulness on a more nuanced “grayscale” that allows interpretations to be useful even if they are not globally and definitively faithful.
Acknowledgements
We thank Yanai Elazar for welcome input on the presentation and organization of the paper. We also thank the reviewers for additional feedback and pointing to relevant literature in HCI and IUI.
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT). | Two models will make the same predictions if and only if they use the same reasoning process., On similar inputs, the model makes similar decisions if and only if its reasoning is similar., Certain parts of the input are more important to the model reasoning than others. Moreover, the contributions of different parts of the input are independent from each other. |
d4b9cdb4b2dfda1e0d96ab6c3b5e2157fd52685e | d4b9cdb4b2dfda1e0d96ab6c3b5e2157fd52685e_1 | Q: Which are three assumptions in current approaches for defining faithfulness?
Text: Introduction
Fueled by recent advances in deep-learning and language processing, NLP systems are increasingly being used for prediction and decision-making in many fields BIBREF0, including sensitive ones such as health, commerce and law BIBREF1. Unfortunately, these highly flexible and highly effective neural models are also opaque. There is therefore a critical need for explaining learning-based models' decisions.
The emerging research topic of interpretability or explainability has grown rapidly in recent years. Unfortunately, not without growing pains.
One such pain is the challenge of defining—and evaluating—what constitutes a quality interpretation. Current approaches define interpretation in a rather ad-hoc manner, motivated by practical use-cases and applications. However, this view often fails to distinguish between distinct aspects of the interpretation's quality, such as readability, plausibility and faithfulness BIBREF2. We argue (§SECREF2, §SECREF5) such conflation is harmful, and that faithfulness should be defined and evaluated explicitly, and independently from plausibility.
Our main focus is the evaluation of the faithfulness of an explanation. Intuitively, a faithful interpretation is one that accurately represents the reasoning process behind the model's prediction. We find this to be a pressing issue in explainability: in cases where an explanation is required to be faithful, imperfect or misleading evaluation can have disastrous effects.
While literature in this area may implicitly or explicitly evaluate faithfulness for specific explanation techniques, there is no consistent and formal definition of faithfulness. We uncover three assumptions that underlie all these attempts. By making the assumptions explicit and organizing the literature around them, we “connect the dots” between seemingly distinct evaluation methods, and also provide a basis for discussion regarding the desirable properties of faithfulness (§SECREF6).
Finally, we observe a trend by which faithfulness is treated as a binary property, followed by showing that an interpretation method is not faithful. We claim that this is unproductive (§SECREF7), as the assumptions are nearly impossible to satisfy fully, and it is all too easy to disprove the faithfulness of an interpretation method via a counter-example. What can be done? We argue for a more practical view of faithfulness, calling for graded criteria that measure the extent and likelihood of an interpretation being faithful in practice (§SECREF8). While we have started to work in this area, we pose the exact formalization of these criteria, and concrete evaluation methods for them, as a central challenge to the community for the coming future.
Faithfulness vs. Plausibility
There is considerable research effort in attempting to define and categorize the desiderata of a learned system's interpretation, most of which revolves around specific use-cases BIBREF17, BIBREF15.
Two particularly notable criteria, each useful for a different purpose, are plausibility and faithfulness. “Plausibility” refers to how convincing the interpretation is to humans, while “faithfulness” refers to how accurately it reflects the true reasoning process of the model BIBREF2, BIBREF18.
Naturally, it is possible to satisfy one of these properties without the other. For example, consider the case of interpretation via post-hoc text generation—where an additional “generator” component outputs a textual explanation of the model's decision, and the generator is learned with supervision of textual explanations BIBREF19, BIBREF20, BIBREF21. In this case, plausibility is the dominating property, while there is no faithfulness guarantee.
Despite the difference between the two criteria, many authors do not clearly make the distinction, and sometimes conflate the two. Moreover, the majority of works do not explicitly name the criteria under consideration, even when they clearly belong to one camp or the other.
We argue that this conflation is dangerous. For example, consider the case of recidivism prediction, where a judge is exposed to a model's prediction and its interpretation, and the judge believes the interpretation to reflect the model's reasoning process. Since the interpretation's faithfulness carries legal consequences, a plausible but unfaithful interpretation may be the worst-case scenario. The lack of explicit claims by research may cause misinformation to potential users of the technology, who are not versed in its inner workings. Therefore, clear distinction between these terms is critical.
Inherently Interpretable?
A distinction is often made between two methods of achieving interpretability: (1) interpreting existing models via post-hoc techniques; and (2) designing inherently interpretable models. BIBREF29 argues in favor of inherently interpretable models, which by design claim to provide more faithful interpretations than post-hoc interpretation of black-box models.
We warn against taking this argumentation at face-value: a method being “inherently interpretable” is merely a claim that needs to be verified before it can be trusted. Indeed, while attention mechanisms have been considered as “inherently interpretable” BIBREF30, BIBREF31, recent work casts doubt on their faithfulness BIBREF32, BIBREF33, BIBREF18.
Evaluation via Utility
While explanations have many different use-cases, such as model debugging, lawful guarantees or health-critical guarantees, one other possible use-case with particularly prominent evaluation literature is Intelligent User Interfaces (IUI), via Human-Computer Interaction (HCI), of automatic models assisting human decision-makers. In this case, the goal of the explanation is to increase the degree of trust between the user and the system, giving the user more nuance towards whether the system's decision is likely correct, or not. In the general case, the final evaluation metric is the performance of the user at their task BIBREF34. For example, BIBREF35 evaluate various explanations of a model in a setting of trivia question answering.
However, in the context of faithfulness, we must warn against HCI-inspired evaluation, as well: increased performance in this setting is not indicative of faithfulness; rather, it is indicative of correlation between the plausibility of the explanations and the model's performance.
To illustrate, consider the following fictional case of a non-faithful explanation system, in an HCI evaluation setting: the explanation given is a heat-map of the textual input, attributing scores to various tokens. Assume the system explanations behave in the following way: when the output is correct, the explanation consists of random content words; and when the output is incorrect, it consists of random punctuation marks. In other words, the explanation is more likely to appear plausible when the model is correct, while at the same time not reflecting the true decision process of the model. The user, convinced by the nicer-looking explanations, performs better using this system. However, the explanation consistently claimed random tokens to be highly relevant to the model's reasoning process. While the system is concretely useful, the claims given by the explanation do not reflect the model's decisions whatsoever (by design).
While the above scenario is extreme, this misunderstanding is not entirely unlikely, since any degree of correlation between plausibility and model performance will result in increased user performance, regardless of any notion of faithfulness.
Guidelines for Evaluating Faithfulness
We propose the following guidelines for evaluating the faithfulness of explanations. These guidelines address common pitfalls and sub-optimal practices we observed in the literature.
Guidelines for Evaluating Faithfulness ::: Be explicit in what you evaluate.
Conflating plausibility and faithfulness is harmful. You should be explicit about which of them you evaluate, and use suitable methodologies for each one. Of course, the same applies when designing interpretation techniques—be clear about which properties are being prioritized.
Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation should not involve human-judgement on the quality of interpretation.
We note that: (1) humans cannot judge whether an interpretation is faithful: if they understood the model, the interpretation would be unnecessary; (2) for similar reasons, we cannot obtain supervision for this problem, either. Therefore, human judgement should not be involved in evaluation for faithfulness, as human judgement measures plausibility.
Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation should not involve human-provided gold labels.
We should be able to interpret incorrect model predictions, just the same as correct ones. Evaluation methods that rely on gold labels are influenced by human priors on what the model should do, and again push the evaluation in the direction of plausibility.
Guidelines for Evaluating Faithfulness ::: Do not trust “inherent interpretability” claims.
Inherent interpretability is a claim until proven otherwise. Explanations provided by “inherently interpretable” models must be held to the same standards as post-hoc interpretation methods, and be evaluated for faithfulness using the same set of evaluation techniques.
Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation of IUI systems should not rely on user performance.
End-task user performance in HCI settings is merely indicative of correlation between plausibility and model performance, however small this correlation is. While important to evaluate the utility of the interpretations for some use-cases, it is unrelated to faithfulness.
Defining Faithfulness
What does it mean for an interpretation method to be faithful? Intuitively, we would like the provided interpretation to reflect the true reasoning process of the model when making a decision. But what is a reasoning process of a model, and how can reasoning processes be compared to each other?
Lacking a standard definition, different works evaluate their methods by introducing tests to measure properties that they believe good interpretations should satisfy. Some of these tests measure aspects of faithfulness. These ad-hoc definitions are often unique to each paper and inconsistent with each other, making it hard to find commonalities.
We uncover three assumptions that underlie all these methods, enabling us to organize the literature along standardized axes, and relate seemingly distinct lines of work. Moreover, exposing the underlying assumptions enables an informed discussion regarding their validity and merit (we leave such a discussion for future work, by us or others).
These assumptions, to our knowledge, encapsulate the current working definitions of faithfulness used by the research community.
Defining Faithfulness ::: Assumption 1 (The Model Assumption).
Two models will make the same predictions if and only if they use the same reasoning process.
Corollary 1.1. An interpretation system is unfaithful if it results in different interpretations of models that make the same decisions.
As demonstrated by a recent example concerning NLP models, this assumption can be used for proof by counter-example. Theoretically, if all possible models which can perfectly mimic the model's decisions also provide the same interpretations, then they could be deemed faithful. Conversely, showing that two models provide the same results but different interpretations disproves the faithfulness of the method. BIBREF18 show how these counter-examples can be derived with adversarial training of models which can mimic the original model, yet provide different explanations.
Corollary 1.2. An interpretation is unfaithful if it results in different decisions than the model it interprets.
A more direct application of the Model Assumption is via the notion of fidelity BIBREF15, BIBREF8. For cases in which the explanation is itself a model capable of making decisions (e.g., decision trees or rule lists BIBREF36), fidelity is defined as the degree to which the explanation model can mimic the original model's decisions (as an accuracy score). For cases where the explanation is not a computable model, BIBREF37 propose a simple way of mapping explanations to decisions via crowd-sourcing, by asking humans to simulate the model's decision without any access to the model, and only access to the input and explanation (termed forward simulation). This idea is further explored and used in practice by BIBREF38.
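As a rough sketch of how forward simulation turns into a score, assume human guesses of the model's output have been collected from the input and explanation alone; the score is then simply the agreement rate with the model's actual predictions. The data layout below is illustrative only, not the protocol of BIBREF37 or BIBREF38.
```python
def forward_simulation_score(model_predictions, human_guesses):
    """Fraction of instances on which annotators, shown only the input and the
    explanation, correctly guess the model's decision (not the gold label)."""
    assert len(model_predictions) == len(human_guesses)
    agreements = sum(m == h for m, h in zip(model_predictions, human_guesses))
    return agreements / len(model_predictions)
```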
Defining Faithfulness ::: Assumption 2 (The Prediction Assumption).
On similar inputs, the model makes similar decisions if and only if its reasoning is similar.
Corollary 2. An interpretation system is unfaithful if it provides different interpretations for similar inputs and outputs.
Since the interpretation serves as a proxy for the model's “reasoning”, it should satisfy the same constraints. In other words, interpretations of similar decisions should be similar, and interpretations of dissimilar decisions should be dissimilar.
This assumption is more useful for disproving the faithfulness of an interpretation than for proving it, since a disproof only requires finding appropriate cases where the assumption doesn't hold, whereas a proof would require checking a (very large) satisfactory quantity of examples, or even the entire input space.
One recent discussion in the NLP community BIBREF33, BIBREF18 concerns the use of this underlying assumption for evaluating attention heat-maps as explanations. The former attempts to provide different explanations of similar decisions per instance. The latter critiques the former and is based more heavily on the model assumption, described above.
Additionally, BIBREF39 propose to introduce a constant shift to the input space, and evaluate whether the explanation changes significantly as the final decision stays the same. BIBREF16 formalize a generalization of this technique under the term interpretability robustness: interpretations should be invariant to small perturbations in the input (a direct consequence of the prediction assumption). BIBREF40 further expand on this notion as “consistency of the explanation with respect to the model”. Unfortunately, robustness measures are difficult to apply in NLP settings due to the discrete input.
Defining Faithfulness ::: Assumption 3 (The Linearity Assumption).
Certain parts of the input are more important to the model reasoning than others. Moreover, the contributions of different parts of the input are independent from each other.
Corollary 3. Under certain circumstances, heat-map interpretations can be faithful.
This assumption is employed by methods that consider heat-maps (e.g., attention maps) over the input as explanations, particularly popular in NLP. Heat-maps are claims about which parts of the input are more relevant than others to the model's decision. As such, we can design “stress tests” to verify whether they uphold their claims.
One method proposed to do so is erasure, where the “most relevant” parts of the input—according to the explanation—are erased from the input, in expectation that the model's decision will change BIBREF25, BIBREF42, BIBREF32. Alternatively, the “least relevant” parts of the input may be erased, in expectation that the model's decision will not change BIBREF43. BIBREF44, BIBREF45 propose two measures, comprehensiveness and sufficiency, as a formal generalization of erasure: the degree to which the model is influenced by the removal of the high-ranking features, or by the inclusion of solely the high-ranking features.
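As a concrete, simplified reading of these erasure-based measures, the sketch below computes comprehensiveness and sufficiency for a single example, assuming a hypothetical `predict_proba` interface that maps a token list to class probabilities; the token-removal strategy and the fixed $k$ are illustrative rather than the exact protocol of BIBREF44, BIBREF45.
```python
import numpy as np

def comprehensiveness_and_sufficiency(predict_proba, tokens, scores, label, k):
    """Erasure-based scores for one example and its heat-map explanation.

    predict_proba: callable mapping a token list to class probabilities
                   (hypothetical model interface).
    tokens:        input tokens; scores: per-token relevance from the heat-map.
    label:         the model's predicted class; k: size of the "most relevant" set.
    """
    ranked = np.argsort(np.asarray(scores))[::-1]   # tokens ranked by claimed relevance
    top_k = set(ranked[:k].tolist())
    full_prob = predict_proba(tokens)[label]

    # Comprehensiveness: how much the prediction drops once the claimed
    # evidence (top-k tokens) is erased.
    without_top = [t for i, t in enumerate(tokens) if i not in top_k]
    comprehensiveness = full_prob - predict_proba(without_top)[label]

    # Sufficiency: how much the prediction drops when only the claimed
    # evidence is kept.
    only_top = [t for i, t in enumerate(tokens) if i in top_k]
    sufficiency = full_prob - predict_proba(only_top)[label]
    return comprehensiveness, sufficiency
```
Under this reading, a faithful heat-map should score high on comprehensiveness (the prediction degrades once the claimed evidence is erased) and low on the sufficiency drop (the claimed evidence alone largely supports the prediction).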
Is Faithful Interpretation Impossible?
The aforementioned assumptions are currently utilized to evaluate faithfulness in a binary manner, whether an interpretation is strictly faithful or not. Specifically, they are most often used to show that a method is not faithful, by constructing cases in which the assumptions do not hold for the suggested method. In other words, there is a clear trend of proof via counter-example, for various interpretation methods, that they are not globally faithful.
We claim that this is unproductive, as we expect these various methods to consistently result in negative (not faithful) results, continuing the current trend. This follows because an interpretation functions as an approximation of the model or decision's true reasoning process, so it by definition loses information. By the pigeonhole principle, there will be inputs with deviation between interpretation and reasoning.
This is observed in practice, in numerous works that show adversarial or pathological behaviors arising from the deeply non-linear and high-dimensional decision boundaries of current models. Furthermore, because we lack supervision regarding which models or decisions are indeed mappable to human-readable concepts, we cannot ignore the approximation errors.
This poses a high bar for explanation methods to fulfill, a bar which we estimate will not be overcome soon, if at all. What should we do, then, if we desire a system that provides faithful explanations?
Towards Better Faithfulness Criteria
We argue that a way out of this standstill is in a more practical and nuanced methodology for defining and evaluating faithfulness. We propose the following challenge to the community: we must develop a formal definition and evaluation of faithfulness that allow us the freedom to say when a method is sufficiently faithful to be useful in practice.
We note two possible approaches to this end:
Across models and tasks: The degree (as grayscale) of faithfulness at the level of specific models and tasks. Perhaps some models or tasks allow sufficiently faithful interpretation, even if the same is not true for others.
For example, the method may not be faithful for some question-answering task, but faithful for movie review sentiment, perhaps based on various syntactic and semantic attributes of those tasks.
Across input space: The degree of faithfulness at the level of subspaces of the input space, such as neighborhoods of similar inputs, or singular inputs themselves. If we are able to say with some degree of confidence whether a specific decision's explanation is faithful to the model, even if the interpretation method is not considered universally faithful, it can be used with respect to those specific areas or instances only.
Conclusion
The opinion proposed in this paper is two-fold:
First, interpretability evaluation often conflates evaluating faithfulness and plausibility together. We should tease apart the two definitions and focus solely on evaluating faithfulness without any supervision or influence of the convincing power of the interpretation.
Second, faithfulness is often evaluated in a binary “faithful or not faithful” manner, and we believe strictly faithful interpretation is a “unicorn” which will likely never be found. We should instead evaluate faithfulness on a more nuanced “grayscale” that allows interpretations to be useful even if they are not globally and definitively faithful.
Acknowledgements
We thank Yanai Elazar for welcome input on the presentation and organization of the paper. We also thank the reviewers for additional feedback and pointing to relevant literature in HCI and IUI.
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT). | Two models will make the same predictions if and only if they use the same reasoning process., On similar inputs, the model makes similar decisions if and only if its reasoning is similar., Certain parts of the input are more important to the model reasoning than others. Moreover, the contributions of different parts of the input are independent from each other.
2a859e80d8647923181cb2d8f9a2c67b1c3f4608 | 2a859e80d8647923181cb2d8f9a2c67b1c3f4608_0 | Q: Which are key points in guidelines for faithfulness evaluation?
Text: Introduction
Fueled by recent advances in deep-learning and language processing, NLP systems are increasingly being used for prediction and decision-making in many fields BIBREF0, including sensitive ones such as health, commerce and law BIBREF1. Unfortunately, these highly flexible and highly effective neural models are also opaque. There is therefore a critical need for explaining learning-based models' decisions.
The emerging research topic of interpretability or explainability has grown rapidly in recent years. Unfortunately, not without growing pains.
One such pain is the challenge of defining—and evaluating—what constitutes a quality interpretation. Current approaches define interpretation in a rather ad-hoc manner, motivated by practical use-cases and applications. However, this view often fails to distinguish between distinct aspects of the interpretation's quality, such as readability, plausibility and faithfulness BIBREF2. We argue (§SECREF2, §SECREF5) such conflation is harmful, and that faithfulness should be defined and evaluated explicitly, and independently from plausibility.
Our main focus is the evaluation of the faithfulness of an explanation. Intuitively, a faithful interpretation is one that accurately represents the reasoning process behind the model's prediction. We find this to be a pressing issue in explainability: in cases where an explanation is required to be faithful, imperfect or misleading evaluation can have disastrous effects.
While literature in this area may implicitly or explicitly evaluate faithfulness for specific explanation techniques, there is no consistent and formal definition of faithfulness. We uncover three assumptions that underlie all these attempts. By making the assumptions explicit and organizing the literature around them, we “connect the dots” between seemingly distinct evaluation methods, and also provide a basis for discussion regarding the desirable properties of faithfulness (§SECREF6).
Finally, we observe a trend by which faithfulness is treated as a binary property, followed by showing that an interpretation method is not faithful. We claim that this is unproductive (§SECREF7), as the assumptions are nearly impossible to satisfy fully, and it is all too easy to disprove the faithfulness of an interpretation method via a counter-example. What can be done? We argue for a more practical view of faithfulness, calling for graded criteria that measure the extent and likelihood of an interpretation being faithful in practice (§SECREF8). While we have started to work in this area, we pose the exact formalization of these criteria, and concrete evaluation methods for them, as a central challenge to the community for the near future.
Faithfulness vs. Plausibility
There is considerable research effort in attempting to define and categorize the desiderata of a learned system's interpretation, most of which revolves around specific use-cases BIBREF17, BIBREF15.
Two particularly notable criteria, each useful for a different purposes, are plausibility and faithfulness. “Plausibility” refers to how convincing the interpretation is to humans, while “faithfulness” refers to how accurately it reflects the true reasoning process of the model BIBREF2, BIBREF18.
Naturally, it is possible to satisfy one of these properties without the other. For example, consider the case of interpretation via post-hoc text generation—where an additional “generator” component outputs a textual explanation of the model's decision, and the generator is learned with supervision of textual explanations BIBREF19, BIBREF20, BIBREF21. In this case, plausibility is the dominating property, while there is no faithfulness guarantee.
Despite the difference between the two criteria, many authors do not clearly make the distinction, and sometimes conflate the two. Moreover, the majority of works do not explicitly name the criteria under consideration, even when they clearly belong to one camp or the other.
We argue that this conflation is dangerous. For example, consider the case of recidivism prediction, where a judge is exposed to a model's prediction and its interpretation, and the judge believes the interpretation to reflect the model's reasoning process. Since the interpretation's faithfulness carries legal consequences, a plausible but unfaithful interpretation may be the worst-case scenario. The lack of explicit claims in research may misinform potential users of the technology, who are not versed in its inner workings. Therefore, a clear distinction between these terms is critical.
Inherently Interpretable?
A distinction is often made between two methods of achieving interpretability: (1) interpreting existing models via post-hoc techniques; and (2) designing inherently interpretable models. BIBREF29 argues in favor of inherently interpretable models, which are claimed, by design, to provide more faithful interpretations than post-hoc interpretation of black-box models.
We warn against taking this argumentation at face-value: a method being “inherently interpretable” is merely a claim that needs to be verified before it can be trusted. Indeed, while attention mechanisms have been considered “inherently interpretable” BIBREF30, BIBREF31, recent work casts doubt on their faithfulness BIBREF32, BIBREF33, BIBREF18.
Evaluation via Utility
While explanations have many different use-cases, such as model debugging, lawful guarantees or health-critical guarantees, one use-case with a particularly prominent evaluation literature is Intelligent User Interfaces (IUI), studied in Human-Computer Interaction (HCI), in which automatic models assist human decision-makers. In this case, the goal of the explanation is to increase the degree of trust between the user and the system, giving the user more nuance as to whether the system's decision is likely correct or not. In the general case, the final evaluation metric is the performance of the user at their task BIBREF34. For example, BIBREF35 evaluate various explanations of a model in a setting of trivia question answering.
However, in the context of faithfulness, we must warn against HCI-inspired evaluation, as well: increased performance in this setting is not indicative of faithfulness; rather, it is indicative of correlation between the plausibility of the explanations and the model's performance.
To illustrate, consider the following fictional case of a non-faithful explanation system, in an HCI evaluation setting: the explanation given is a heat-map of the textual input, attributing scores to various tokens. Assume the system explanations behave in the following way: when the output is correct, the explanation consists of random content words; and when the output is incorrect, it consists of random punctuation marks. In other words, the explanation is more likely to appear plausible when the model is correct, while at the same time not reflecting the true decision process of the model. The user, convinced by the nicer-looking explanations, performs better using this system. However, the explanation consistently claimed random tokens to be highly relevant to the model's reasoning process. While the system is concretely useful, the claims given by the explanation do not reflect the model's decisions whatsoever (by design).
While the above scenario is extreme, this misunderstanding is not entirely unlikely, since any degree of correlation between plausibility and model performance will result in increased user performance, regardless of any notion of faithfulness.
Guidelines for Evaluating Faithfulness
We propose the following guidelines for evaluating the faithfulness of explanations. These guidelines address common pitfalls and sub-optimal practices we observed in the literature.
Guidelines for Evaluating Faithfulness ::: Be explicit in what you evaluate.
Conflating plausibility and faithfulness is harmful. You should be explicit about which of them you evaluate, and use suitable methodologies for each one. Of course, the same applies when designing interpretation techniques—be clear about which properties are being prioritized.
Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation should not involve human-judgement on the quality of interpretation.
We note that: (1) humans cannot judge whether an interpretation is faithful: if they understood the model, the interpretation would be unnecessary; (2) for similar reasons, we cannot obtain supervision for this problem, either. Therefore, human judgement should not be involved in evaluation for faithfulness, as human judgement measures plausibility.
Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation should not involve human-provided gold labels.
We should be able to interpret incorrect model predictions, just the same as correct ones. Evaluation methods that rely on gold labels are influenced by human priors on what the model should do, and again push the evaluation in the direction of plausibility.
Guidelines for Evaluating Faithfulness ::: Do not trust “inherent interpretability” claims.
Inherent interpretability is a claim until proven otherwise. Explanations provided by “inherently interpretable” models must be held to the same standards as post-hoc interpretation methods, and be evaluated for faithfulness using the same set of evaluation techniques.
Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation of IUI systems should not rely on user performance.
End-task user performance in HCI settings is merely indicative of correlation between plausibility and model performance, however small this correlation is. While important to evaluate the utility of the interpretations for some use-cases, it is unrelated to faithfulness.
Defining Faithfulness
What does it mean for an interpretation method to be faithful? Intuitively, we would like the provided interpretation to reflect the true reasoning process of the model when making a decision. But what is a reasoning process of a model, and how can reasoning processes be compared to each other?
Lacking a standard definition, different works evaluate their methods by introducing tests to measure properties that they believe good interpretations should satisfy. Some of these tests measure aspects of faithfulness. These ad-hoc definitions are often unique to each paper and inconsistent with each other, making it hard to find commonalities.
We uncover three assumptions that underlie all these methods, enabling us to organize the literature along standardized axes, and relate seemingly distinct lines of work. Moreover, exposing the underlying assumptions enables an informed discussion regarding their validity and merit (we leave such a discussion for future work, by us or others).
These assumptions, to our knowledge, encapsulate the current working definitions of faithfulness used by the research community.
Defining Faithfulness ::: Assumption 1 (The Model Assumption).
Two models will make the same predictions if and only if they use the same reasoning process.
Corollary 1.1. An interpretation system is unfaithful if it results in different interpretations of models that make the same decisions.
As demonstrated by a recent example concerning NLP models, this assumption can be used for proof by counter-example. Theoretically, if all possible models which can perfectly mimic the model's decisions also provide the same interpretations, then they could be deemed faithful. Conversely, showing that two models provide the same results but different interpretations disproves the faithfulness of the method. BIBREF18 show how these counter-examples can be derived with adversarial training of models which can mimic the original model, yet provide different explanations.
Corollary 1.2. An interpretation is unfaithful if it results in different decisions than the model it interprets.
A more direct application of the Model Assumption is via the notion of fidelity BIBREF15, BIBREF8. For cases in which the explanation is itself a model capable of making decisions (e.g., decision trees or rule lists BIBREF36), fidelity is defined as the degree to which the explanation model can mimic the original model's decisions (as an accuracy score). For cases where the explanation is not a computable model, BIBREF37 propose a simple way of mapping explanations to decisions via crowd-sourcing, by asking humans to simulate the model's decision without any access to the model, and only access to the input and explanation (termed forward simulation). This idea is further explored and used in practice by BIBREF38.
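As a minimal sketch of fidelity under this assumption, the function below measures how often an explanation model reproduces the original model's decisions on a set of inputs; both `predict` callables are placeholders for whatever pair of models is being compared, and the evaluation set is assumed to be held out.
```python
import numpy as np

def fidelity(original_predict, explanation_predict, inputs):
    """Fidelity: accuracy of the explanation model at mimicking the original
    model's decisions (note: agreement with the model, not with gold labels)."""
    original = np.array([original_predict(x) for x in inputs])
    mimic = np.array([explanation_predict(x) for x in inputs])
    return float((original == mimic).mean())
```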
Defining Faithfulness ::: Assumption 2 (The Prediction Assumption).
On similar inputs, the model makes similar decisions if and only if its reasoning is similar.
Corollary 2. An interpretation system is unfaithful if it provides different interpretations for similar inputs and outputs.
Since the interpretation serves as a proxy for the model's “reasoning”, it should satisfy the same constraints. In other words, interpretations of similar decisions should be similar, and interpretations of dissimilar decisions should be dissimilar.
This assumption is more useful for disproving the faithfulness of an interpretation than for proving it, since a disproof only requires finding appropriate cases where the assumption doesn't hold, whereas a proof would require checking a (very large) satisfactory quantity of examples, or even the entire input space.
One recent discussion in the NLP community BIBREF33, BIBREF18 concerns the use of this underlying assumption for evaluating attention heat-maps as explanations. The former attempts to provide different explanations of similar decisions per instance. The latter critiques the former and is based more heavily on the model assumption, described above.
Additionally, BIBREF39 propose to introduce a constant shift to the input space, and evaluate whether the explanation changes significantly as the final decision stays the same. BIBREF16 formalize a generalization of this technique under the term interpretability robustness: interpretations should be invariant to small perturbations in the input (a direct consequence of the prediction assumption). BIBREF40 further expand on this notion as “consistency of the explanation with respect to the model”. Unfortunately, robustness measures are difficult to apply in NLP settings due to the discrete input.
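The sketch below illustrates one possible instantiation of such a robustness check; `predict`, `explain`, and `perturb` are hypothetical placeholders, and, as noted, defining meaningful small perturbations for discrete text is itself the difficult part.
```python
import numpy as np

def explanation_instability(predict, explain, perturb, x, n_neighbors=20):
    """Largest change in the explanation over label-preserving perturbations.

    predict(x) -> class label; explain(x) -> 1-D relevance vector;
    perturb(x) -> a slightly perturbed copy of x of the same length.
    """
    y = predict(x)
    base = np.asarray(explain(x), dtype=float)
    worst = 0.0
    for _ in range(n_neighbors):
        x_p = perturb(x)
        if predict(x_p) != y:       # only compare explanations of similar decisions
            continue
        change = np.linalg.norm(np.asarray(explain(x_p), dtype=float) - base)
        worst = max(worst, change)
    return worst
```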
Defining Faithfulness ::: Assumption 3 (The Linearity Assumption).
Certain parts of the input are more important to the model reasoning than others. Moreover, the contributions of different parts of the input are independent from each other.
Corollary 3. Under certain circumstances, heat-map interpretations can be faithful.
This assumption is employed by methods that consider heat-maps (e.g., attention maps) over the input as explanations, particularly popular in NLP. Heat-maps are claims about which parts of the input are more relevant than others to the model's decision. As such, we can design “stress tests” to verify whether they uphold their claims.
One method proposed to do so is erasure, where the “most relevant” parts of the input—according to the explanation—are erased from the input, in expectation that the model's decision will change BIBREF25, BIBREF42, BIBREF32. Alternatively, the “least relevant” parts of the input may be erased, in expectation that the model's decision will not change BIBREF43. BIBREF44, BIBREF45 propose two measures, comprehensiveness and sufficiency, as a formal generalization of erasure: the degree to which the model is influenced by the removal of the high-ranking features, or by the inclusion of solely the high-ranking features.
Is Faithful Interpretation Impossible?
The aforementioned assumptions are currently utilized to evaluate faithfulness in a binary manner, whether an interpretation is strictly faithful or not. Specifically, they are most often used to show that a method is not faithful, by constructing cases in which the assumptions do not hold for the suggested method. In other words, there is a clear trend of proof via counter-example, for various interpretation methods, that they are not globally faithful.
We claim that this is unproductive, as we expect these various methods to consistently result in negative (not faithful) results, continuing the current trend. This follows because an interpretation functions as an approximation of the model or decision's true reasoning process, so it by definition loses information. By the pigeonhole principle, there will be inputs with deviation between interpretation and reasoning.
This is observed in practice, in numerous works that show adversarial or pathological behaviors arising from the deeply non-linear and high-dimensional decision boundaries of current models. Furthermore, because we lack supervision regarding which models or decisions are indeed mappable to human-readable concepts, we cannot ignore the approximation errors.
This poses a high bar for explanation methods to fulfill, a bar which we estimate will not be overcome soon, if at all. What should we do, then, if we desire a system that provides faithful explanations?
Towards Better Faithfulness Criteria
We argue that a way out of this standstill is in a more practical and nuanced methodology for defining and evaluating faithfulness. We propose the following challenge to the community: we must develop a formal definition and evaluation of faithfulness that allow us the freedom to say when a method is sufficiently faithful to be useful in practice.
We note two possible approaches to this end:
Across models and tasks: The degree (as grayscale) of faithfulness at the level of specific models and tasks. Perhaps some models or tasks allow sufficiently faithful interpretation, even if the same is not true for others.
For example, the method may not be faithful for some question-answering task, but faithful for movie review sentiment, perhaps based on various syntactic and semantic attributes of those tasks.
Across input space: The degree of faithfulness at the level of subspaces of the input space, such as neighborhoods of similar inputs, or singular inputs themselves. If we are able to say with some degree of confidence whether a specific decision's explanation is faithful to the model, even if the interpretation method is not considered universally faithful, it can be used with respect to those specific areas or instances only.
Conclusion
The opinion proposed in this paper is two-fold:
First, interpretability evaluation often conflates evaluating faithfulness and plausibility together. We should tease apart the two definitions and focus solely on evaluating faithfulness without any supervision or influence of the convincing power of the interpretation.
Second, faithfulness is often evaluated in a binary “faithful or not faithful” manner, and we believe strictly faithful interpretation is a “unicorn” which will likely never be found. We should instead evaluate faithfulness on a more nuanced “grayscale” that allows interpretations to be useful even if they are not globally and definitively faithful.
Acknowledgements
We thank Yanai Elazar for welcome input on the presentation and organization of the paper. We also thank the reviewers for additional feedback and pointing to relevant literature in HCI and IUI.
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT). | Be explicit in what you evaluate., Faithfulness evaluation should not involve human-judgement on the quality of interpretation., Faithfulness evaluation should not involve human-provided gold labels., Do not trust “inherent interpretability” claims., Faithfulness evaluation of IUI systems should not rely on user performance.
aceac4ad16ffe1af0f01b465919b1d4422941a6b | aceac4ad16ffe1af0f01b465919b1d4422941a6b_0 | Q: Did they use the state-of-the-art model to analyze the attention?
Text: Introduction
Deep learning has achieved tremendous success for many NLP tasks. However, unlike traditional methods that provide optimized weights for human understandable features, the behavior of deep learning models is much harder to interpret. Due to the high dimensionality of word embeddings, and the complex, typically recurrent architectures used for textual data, it is often unclear how and why a deep learning model reaches its decisions.
There are a few attempts toward explaining/interpreting deep learning-based models, mostly by visualizing the representation of words and/or hidden states, and their importances (via saliency or erasure) on shallow tasks like sentiment analysis and POS tagging BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . In contrast, we focus on interpreting the gating and attention signals of the intermediate layers of deep models in the challenging task of Natural Language Inference. A key concept in explaining deep models is saliency, which determines what is critical for the final decision of a deep model. So far, saliency has only been used to illustrate the impact of word embeddings. In this paper, we extend this concept to the intermediate layer of deep models to examine the saliency of attention as well as the LSTM gating signals to understand the behavior of these components and their impact on the final decision.
We make two main contributions. First, we introduce new strategies for interpreting the behavior of deep models in their intermediate layers, specifically, by examining the saliency of the attention and the gating signals. Second, we provide an extensive analysis of the state-of-the-art model for the NLI task and show that our methods reveal interesting insights not available from traditional methods of inspecting attention and word saliency.
In this paper, our focus is on NLI, which is a fundamental NLP task that requires both understanding and reasoning. Furthermore, the state-of-the-art NLI models employ complex neural architectures involving key mechanisms, such as attention and repeated reading, widely seen in successful models for other NLP tasks. As such, we expect our methods to be potentially useful for other natural language understanding tasks as well.
Task and Model
In NLI BIBREF4 , we are given two sentences, a premise and a hypothesis, the goal is to decide the logical relationship (Entailment, Neutral, or Contradiction) between them.
Many of the top performing NLI models BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , are variants of the ESIM model BIBREF11 , which we choose to analyze in this paper. ESIM reads the sentences independently using LSTM at first, and then applies attention to align/contrast the sentences. Another round of LSTM reading then produces the final representations, which are compared to make the prediction. Detailed description of ESIM can be found in the Appendix.
Using the SNLI BIBREF4 data, we train two variants of ESIM, with dimensionality 50 and 300 respectively, referred to as ESIM-50 and ESIM-300 in the remainder of the paper.
Visualization of Attention and Gating
In this work, we are primarily interested in the internal workings of the NLI model. In particular, we focus on the attention and the gating signals of LSTM readers, and how they contribute to the decisions of the model.
Attention
Attention has been widely used in many NLP tasks BIBREF12 , BIBREF13 , BIBREF14 and is probably one of the most critical parts that affects the inference decisions. Several pieces of prior work in NLI have attempted to visualize the attention layer to provide some understanding of their models BIBREF5 , BIBREF15 . Such visualizations generate a heatmap representing the similarity between the hidden states of the premise and the hypothesis (Eq. 19 of Appendix). Unfortunately the similarities are often the same regardless of the decision.
Let us consider the following example, where the same premise “A kid is playing in the garden”, is paired with three different hypotheses:
A kid is taking a nap in the garden
A kid is having fun in the garden with her family
A kid is having fun in the garden
Note that the ground truth relationships are Contradiction, Neutral, and Entailment, respectively.
The first row of Fig. 1 shows the visualization of normalized attention for the three cases produced by ESIM-50, which makes correct predictions for all of them. As we can see from the figure, the three attention maps are fairly similar despite the completely different decisions. The key issue is that the attention visualization only allows us to see how the model aligns the premise with the hypothesis, but does not show how such alignment impacts the decision. This prompts us to consider the saliency of attention.
The concept of saliency was first introduced in vision for visualizing the spatial support on an image for a particular object class BIBREF16 . In NLP, saliency has been used to study the importance of words toward a final decision BIBREF0 .
We propose to examine the saliency of attention. Specifically, given a premise-hypothesis pair and the model's decision $y$ , we consider the similarity between a pair of premise and hypothesis hidden states $e_{ij}$ as a variable. The score of the decision $S(y)$ is thus a function of $e_{ij}$ for all $i$ and $j$ . The saliency of $e_{ij}$ is then defined to be $|\frac{\partial S(y)}{\partial {e_{ij}}}|$ .
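A minimal PyTorch sketch of this computation is shown below; the tensor names and the `decision_score` callable, which stands in for the remainder of the ESIM forward pass (from the similarity matrix to the score of the predicted label), are placeholders for illustration, not the authors' released code.
```python
import torch

def attention_saliency(premise_states, hypothesis_states, decision_score):
    """Saliency |dS(y)/de_ij| of each premise/hypothesis alignment.

    premise_states:    (n, d) hidden states of the premise.
    hypothesis_states: (m, d) hidden states of the hypothesis.
    decision_score:    callable that runs the rest of the forward pass on the
                       (n, m) similarity matrix e and returns the scalar score
                       S(y) of the predicted label.
    """
    e = premise_states @ hypothesis_states.t()   # unnormalized attention weights
    e = e.detach().requires_grad_(True)          # treat every e_ij as a variable
    decision_score(e).backward()
    return e.grad.abs()
```
Under this reading, the heat-maps in the first row of Fig. 1 correspond to the normalized attention derived from $e$, while the saliency maps in the second row correspond to the returned gradient magnitudes.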
The second row of Fig. 1 presents the attention saliency map for the three examples acquired by the same ESIM-50 model. Interestingly, the saliencies are clearly different across the examples, each highlighting different parts of the alignment. Specifically, for h1, we see the alignment between “is playing” and “taking a nap” and the alignment of “in a garden” to have the most prominent saliency toward the decision of Contradiction. For h2, the alignment of “kid” and “her family” seems to be the most salient for the decision of Neutral. Finally, for h3, the alignment between “is having fun” and “kid is playing” has the strongest impact toward the decision of Entailment.
From this example, we can see that by inspecting the attention saliency, we effectively pinpoint which part of the alignments contribute most critically to the final prediction whereas simply visualizing the attention itself reveals little information.
In the previous examples, we study the behavior of the same model on different inputs. Now we use the attention saliency to compare the two different ESIM models: ESIM-50 and ESIM-300.
Consider two examples with a shared hypothesis of “A man ordered a book” and premise:
John ordered a book from amazon
Mary ordered a book from amazon
Here ESIM-50 fails to capture the gender connections of the two different names and predicts Neutral for both inputs, whereas ESIM-300 correctly predicts Entailment for the first case and Contradiction for the second.
In the first two columns of Fig. 2 (column a and b) we visualize the attention of the two examples for ESIM-50 (left) and ESIM-300 (right) respectively. Although the two models make different predictions, their attention maps appear qualitatively similar.
In contrast, columns 3-4 of Fig. 2 (column c and d) present the attention saliency for the two examples by ESIM-50 and ESIM-300 respectively. We see that for both examples, ESIM-50 primarily focused on the alignment of “ordered”, whereas ESIM-300 focused more on the alignment of “John” and “Mary” with “man”. It is interesting to note that ESIM-300 does not appear to learn significantly different similarity values compared to ESIM-50 for the two critical pairs of words (“John”, “man”) and (“Mary”, “man”) based on the attention map. The saliency map, however, reveals that the two models use these values quite differently, with only ESIM-300 correctly focusing on them.
LSTM Gating Signals
LSTM gating signals determine the flow of information. In other words, they indicate how LSTM reads the word sequences and how the information from different parts is captured and combined. LSTM gating signals are rarely analyzed, possibly due to their high dimensionality and complexity. In this work, we consider both the gating signals and their saliency, which is computed as the partial derivative of the score of the final decision with respect to each gating signal.
Instead of considering individual dimensions of the gating signals, we aggregate them to consider their norm, both for the signal and for its saliency. Note that ESIM models have two LSTM layers, the first (input) LSTM performs the input encoding and the second (inference) LSTM generates the representation for inference.
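The gate-level quantities can be sketched as follows with a manual LSTM step that keeps the gate activations in the computation graph, so that both their norms and the norms of their gradients with respect to a decision score can be read off; the use of `torch.nn.LSTMCell` weights and the `decision_score` closure are simplifying assumptions for one unidirectional reader, not the authors' implementation.
```python
import torch

def gate_signals_and_saliency(x_seq, lstm_cell, decision_score):
    """Per-time-step norms of LSTM gate activations and of their saliency.

    x_seq:          (T, input_dim) embeddings of one sentence.
    lstm_cell:      a torch.nn.LSTMCell whose weights define the reader.
    decision_score: callable mapping the (T, hidden_dim) hidden states to the
                    scalar score of the predicted label (rest of the network).
    """
    h = x_seq.new_zeros(lstm_cell.hidden_size)
    c = x_seq.new_zeros(lstm_cell.hidden_size)
    gates, states = [], []
    for x_t in x_seq:
        z = (lstm_cell.weight_ih @ x_t + lstm_cell.bias_ih
             + lstm_cell.weight_hh @ h + lstm_cell.bias_hh)
        i, f, g, o = z.chunk(4)                   # PyTorch gate order: i, f, g, o
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        g = torch.tanh(g)
        for gate in (i, f, o):
            gate.retain_grad()                    # keep gradients of non-leaf gates
        c = f * c + i * g
        h = o * torch.tanh(c)
        gates.append((i, f, o))
        states.append(h)

    decision_score(torch.stack(states)).backward()
    signal_norms = [[gate.norm().item() for gate in step] for step in gates]
    saliency_norms = [[gate.grad.norm().item() for gate in step] for step in gates]
    return signal_norms, saliency_norms
```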
In Fig. 3 we plot the normalized signal and saliency norms for different gates (input, forget, output) of the Forward input (bottom three rows) and inference (top three rows) LSTMs. These results are produced by the ESIM-50 model for the three examples of Section 3.1, one for each column.
From the figure, we first note that the saliency tends to be somewhat consistent across different gates within the same LSTM, suggesting that we can interpret them jointly to identify parts of the sentence important for the model's prediction.
Comparing across examples, we see that the saliency curves show pronounced differences across the examples. For instance, the saliency pattern of the Neutral example is significantly different from the other two examples, and heavily concentrated toward the end of the sentence (“with her family”). Note that without this part of the sentence, the relationship would have been Entailment. The focus (evidenced by its strong saliency and strong gating signal) on this particular part, which presents information not available from the premise, explains the model's decision of Neutral.
Comparing the behavior of the input LSTM and the inference LSTM, we observe interesting shifts of focus. In particular, we see that the inference LSTM tends to see much more concentrated saliency over key parts of the sentence, whereas the input LSTM sees more spread of saliency. For example, for the Contradiction example, the input LSTM sees high saliency for both “taking” and “in”, whereas the inference LSTM primarily focuses on “nap”, which is the key word suggesting a Contradiction. Note that ESIM uses attention between the input and inference LSTM layers to align/contrast the sentences, hence it makes sense that the inference LSTM is more focused on the critical differences between the sentences. This is observed for the Neutral example as well.
It is worth noting that, while revealing similar general trends, the backward LSTM can sometimes focus on different parts of the sentence (e.g., see Fig. 11 of Appendix), suggesting the forward and backward readings provide complementary understanding of the sentence.
Conclusion
We propose new visualization and interpretation strategies for neural models to understand how and why they work. We demonstrate the effectiveness of the proposed strategies on a complex task (NLI). Our strategies are able to provide interesting insights not achievable by previous explanation techniques. Our future work will extend our study to consider other NLP tasks and models with the goal of producing useful insights for further improving these models.
Model
In this section we describe the ESIM model. We divide ESIM into three main parts: 1) input encoding, 2) attention, and 3) inference. Figure 4 demonstrates a high-level view of the ESIM framework. Let $u=[u_1, \cdots , u_n]$ and $v=[v_1, \cdots , v_m]$ be the given premise with length $n$ and hypothesis with length $m$ respectively, where $u_i, v_j \in \mathbb {R}^r$ are $r$-dimensional word embedding vectors. The goal is to predict a label $y$ that indicates the logical relationship between premise $u$ and hypothesis $v$. Below we briefly explain the aforementioned parts.
Input Encoding
It utilizes a bidirectional LSTM (BiLSTM) for encoding the given premise and hypothesis using Equations 16 and 17 respectively.
$$\hat{u} = \textit {BiLSTM}(u), \quad \hat{u} \in \mathbb {R}^{n \times 2d}$$ (Eq. 16)
$$\hat{v} = \textit {BiLSTM}(v), \quad \hat{v} \in \mathbb {R}^{m \times 2d}$$ (Eq. 17)
where $\hat{u}$ and $\hat{v}$ are the reading sequences of $u$ and $v$ respectively.
Attention
It employs a soft alignment method to associate the relevant sub-components between the given premise and hypothesis. Equation 19 (energy function) computes the unnormalized attention weights as the similarity of hidden states of the premise and hypothesis.
$$e_{ij} = \hat{u}_i^T \hat{v}_j$$ (Eq. 19)
where $\hat{u}_i$ and $\hat{v}_j$ are the hidden representations of $u_i$ and $v_j$ respectively, which are computed earlier in Equations 16 and 17. Next, for each word in either premise or hypothesis, the relevant semantics in the other sentence is extracted and composed according to $e_{ij}$. Equations 20 and 21 provide formal and specific details of this procedure.
$$\tilde{u}_i = \sum _{j=1}^{m} \frac{\exp (e_{ij})}{\sum _{k=1}^{m} \exp (e_{ik})} \hat{v}_j$$ (Eq. 20)
$$\tilde{v}_j = \sum _{i=1}^{n} \frac{\exp (e_{ij})}{\sum _{k=1}^{n} \exp (e_{kj})} \hat{u}_i$$ (Eq. 21)
where $\tilde{u}_i$ represents the extracted relevant information of $\hat{v}$ by attending to $\hat{u}_i$, while $\tilde{v}_j$ represents the extracted relevant information of $\hat{u}$ by attending to $\hat{v}_j$. Next, it passes the enriched information through a projector layer which produces the final output of the attention stage. Equations 22 and 23 formally represent this process.
$$p_i = \textit {ReLU}(W_p [\hat{u}_i; \tilde{u}_i; \hat{u}_i - \tilde{u}_i; \hat{u}_i \odot \tilde{u}_i] + b_p)$$ (Eq. 22)
$$q_j = \textit {ReLU}(W_p [\hat{v}_j; \tilde{v}_j; \hat{v}_j - \tilde{v}_j; \hat{v}_j \odot \tilde{v}_j] + b_p)$$ (Eq. 23)
Here $\odot $ stands for element-wise product while $W_p$ and $b_p$ are the trainable weights and biases of the projector layer respectively. $p$ and $q$ indicate the output of the attention stage for the premise and hypothesis respectively.
Inference
During this phase, it uses another BiLSTM to aggregate the two sequences of computed matching vectors, $p$ and $q$, from the attention stage (Equations 27 and 28).
$$\hat{p} = \textit {BiLSTM}(p)$$ (Eq. 27)
$$\hat{q} = \textit {BiLSTM}(q)$$ (Eq. 28)
where $\hat{p}$ and $\hat{q}$ are the reading sequences of $p$ and $q$ respectively. Finally, the concatenation of the max and average pooling of $\hat{p}$ and $\hat{q}$ is passed through a multilayer perceptron (MLP) classifier that includes a hidden layer with $\textit {tanh}$ activation and a $\textit {softmax}$ output layer. The model is trained in an end-to-end manner.
Attention Study
Here we provide more examples for the NLI task which are intended to examine specific behaviors of the model. Such examples indicate interesting observations that we can analyze in future work. Table 1 shows the list of all examples.
LSTM Gating Signal
Finally, Figure 11 depicts the backward LSTM gating signals study. | we provide an extensive analysis of the state-of-the-art model
f7070b2e258beac9b09514be2bfcc5a528cc3a0e | f7070b2e258beac9b09514be2bfcc5a528cc3a0e_0 | Q: What is the performance of their model?
Text: Introduction
Deep learning has achieved tremendous success for many NLP tasks. However, unlike traditional methods that provide optimized weights for human understandable features, the behavior of deep learning models is much harder to interpret. Due to the high dimensionality of word embeddings, and the complex, typically recurrent architectures used for textual data, it is often unclear how and why a deep learning model reaches its decisions.
There are a few attempts toward explaining/interpreting deep learning-based models, mostly by visualizing the representation of words and/or hidden states, and their importances (via saliency or erasure) on shallow tasks like sentiment analysis and POS tagging BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . In contrast, we focus on interpreting the gating and attention signals of the intermediate layers of deep models in the challenging task of Natural Language Inference. A key concept in explaining deep models is saliency, which determines what is critical for the final decision of a deep model. So far, saliency has only been used to illustrate the impact of word embeddings. In this paper, we extend this concept to the intermediate layer of deep models to examine the saliency of attention as well as the LSTM gating signals to understand the behavior of these components and their impact on the final decision.
We make two main contributions. First, we introduce new strategies for interpreting the behavior of deep models in their intermediate layers, specifically, by examining the saliency of the attention and the gating signals. Second, we provide an extensive analysis of the state-of-the-art model for the NLI task and show that our methods reveal interesting insights not available from traditional methods of inspecting attention and word saliency.
In this paper, our focus is on NLI, which is a fundamental NLP task that requires both understanding and reasoning. Furthermore, the state-of-the-art NLI models employ complex neural architectures involving key mechanisms, such as attention and repeated reading, widely seen in successful models for other NLP tasks. As such, we expect our methods to be potentially useful for other natural language understanding tasks as well.
Task and Model
In NLI BIBREF4 , we are given two sentences, a premise and a hypothesis, the goal is to decide the logical relationship (Entailment, Neutral, or Contradiction) between them.
Many of the top performing NLI models BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , are variants of the ESIM model BIBREF11 , which we choose to analyze in this paper. ESIM reads the sentences independently using LSTM at first, and then applies attention to align/contrast the sentences. Another round of LSTM reading then produces the final representations, which are compared to make the prediction. Detailed description of ESIM can be found in the Appendix.
Using the SNLI BIBREF4 data, we train two variants of ESIM, with dimensionality 50 and 300 respectively, referred to as ESIM-50 and ESIM-300 in the remainder of the paper.
Visualization of Attention and Gating
In this work, we are primarily interested in the internal workings of the NLI model. In particular, we focus on the attention and the gating signals of LSTM readers, and how they contribute to the decisions of the model.
Attention
Attention has been widely used in many NLP tasks BIBREF12 , BIBREF13 , BIBREF14 and is probably one of the most critical parts that affects the inference decisions. Several pieces of prior work in NLI have attempted to visualize the attention layer to provide some understanding of their models BIBREF5 , BIBREF15 . Such visualizations generate a heatmap representing the similarity between the hidden states of the premise and the hypothesis (Eq. 19 of Appendix). Unfortunately the similarities are often the same regardless of the decision.
Let us consider the following example, where the same premise “A kid is playing in the garden”, is paired with three different hypotheses:
A kid is taking a nap in the garden
A kid is having fun in the garden with her family
A kid is having fun in the garden
Note that the ground truth relationships are Contradiction, Neutral, and Entailment, respectively.
The first row of Fig. 1 shows the visualization of normalized attention for the three cases produced by ESIM-50, which makes correct predictions for all of them. As we can see from the figure, the three attention maps are fairly similar despite the completely different decisions. The key issue is that the attention visualization only allows us to see how the model aligns the premise with the hypothesis, but does not show how such alignment impacts the decision. This prompts us to consider the saliency of attention.
The concept of saliency was first introduced in vision for visualizing the spatial support on an image for a particular object class BIBREF16 . In NLP, saliency has been used to study the importance of words toward a final decision BIBREF0 .
We propose to examine the saliency of attention. Specifically, given a premise-hypothesis pair and the model's decision $y$ , we consider the similarity between a pair of premise and hypothesis hidden states $e_{ij}$ as a variable. The score of the decision $S(y)$ is thus a function of $e_{ij}$ for all $i$ and $j$ . The saliency of $e_{ij}$ is then defined to be $|\frac{\partial S(y)}{\partial {e_{ij}}}|$ .
The second row of Fig. 1 presents the attention saliency map for the three examples acquired by the same ESIM-50 model. Interestingly, the saliencies are clearly different across the examples, each highlighting different parts of the alignment. Specifically, for h1, we see the alignment between “is playing” and “taking a nap” and the alignment of “in a garden” to have the most prominent saliency toward the decision of Contradiction. For h2, the alignment of “kid” and “her family” seems to be the most salient for the decision of Neutral. Finally, for h3, the alignment between “is having fun” and “kid is playing” has the strongest impact toward the decision of Entailment.
From this example, we can see that by inspecting the attention saliency, we effectively pinpoint which part of the alignments contribute most critically to the final prediction whereas simply visualizing the attention itself reveals little information.
In the previous examples, we study the behavior of the same model on different inputs. Now we use the attention saliency to compare the two different ESIM models: ESIM-50 and ESIM-300.
Consider two examples with a shared hypothesis of “A man ordered a book” and premise:
John ordered a book from amazon
Mary ordered a book from amazon
Here ESIM-50 fails to capture the gender connections of the two different names and predicts Neutral for both inputs, whereas ESIM-300 correctly predicts Entailment for the first case and Contradiction for the second.
In the first two columns of Fig. 2 (column a and b) we visualize the attention of the two examples for ESIM-50 (left) and ESIM-300 (right) respectively. Although the two models make different predictions, their attention maps appear qualitatively similar.
In contrast, columns 3-4 of Fig. 2 (column c and d) present the attention saliency for the two examples by ESIM-50 and ESIM-300 respectively. We see that for both examples, ESIM-50 primarily focused on the alignment of “ordered”, whereas ESIM-300 focused more on the alignment of “John” and “Mary” with “man”. It is interesting to note that ESIM-300 does not appear to learn significantly different similarity values compared to ESIM-50 for the two critical pairs of words (“John”, “man”) and (“Mary”, “man”) based on the attention map. The saliency map, however, reveals that the two models use these values quite differently, with only ESIM-300 correctly focusing on them.
LSTM Gating Signals
LSTM gating signals determine the flow of information. In other words, they indicate how LSTM reads the word sequences and how the information from different parts is captured and combined. LSTM gating signals are rarely analyzed, possibly due to their high dimensionality and complexity. In this work, we consider both the gating signals and their saliency, which is computed as the partial derivative of the score of the final decision with respect to each gating signal.
Instead of considering individual dimensions of the gating signals, we aggregate them to consider their norm, both for the signal and for its saliency. Note that ESIM models have two LSTM layers, the first (input) LSTM performs the input encoding and the second (inference) LSTM generates the representation for inference.
In Fig. 3 we plot the normalized signal and saliency norms for different gates (input, forget, output) of the Forward input (bottom three rows) and inference (top three rows) LSTMs. These results are produced by the ESIM-50 model for the three examples of Section 3.1, one for each column.
From the figure, we first note that the saliency tends to be somewhat consistent across different gates within the same LSTM, suggesting that we can interpret them jointly to identify parts of the sentence important for the model's prediction.
Comparing across examples, we see that the saliency curves show pronounced differences across the examples. For instance, the saliency pattern of the Neutral example is significantly different from the other two examples, and heavily concentrated toward the end of the sentence (“with her family”). Note that without this part of the sentence, the relationship would have been Entailment. The focus (evidenced by its strong saliency and strong gating signal) on this particular part, which presents information not available from the premise, explains the model's decision of Neutral.
Comparing the behavior of the input LSTM and the inference LSTM, we observe interesting shifts of focus. In particular, we see that the inference LSTM tends to see much more concentrated saliency over key parts of the sentence, whereas the input LSTM sees more spread of saliency. For example, for the Contradiction example, the input LSTM sees high saliency for both “taking” and “in”, whereas the inference LSTM primarily focuses on “nap”, which is the key word suggesting a Contradiction. Note that ESIM uses attention between the input and inference LSTM layers to align/contrast the sentences, hence it makes sense that the inference LSTM is more focused on the critical differences between the sentences. This is also observed for the Neutral example as well.
It is worth noting that, while revealing similar general trends, the backward LSTM can sometimes focus on different parts of the sentence (e.g., see Fig. 11 of Appendix), suggesting the forward and backward readings provide complementary understanding of the sentence.
Conclusion
We propose new visualization and interpretation strategies for neural models to understand how and why they work. We demonstrate the effectiveness of the proposed strategies on a complex task (NLI). Our strategies are able to provide interesting insights not achievable by previous explanation techniques. Our future work will extend our study to consider other NLP tasks and models with the goal of producing useful insights for further improving these models.

Model

In this section we describe the ESIM model. We divide ESIM into three main parts: 1) input encoding, 2) attention, and 3) inference. Figure 4 demonstrates a high-level view of the ESIM framework. Let $u=[u_1, \cdots , u_n]$ and $v=[v_1, \cdots , v_m]$ be the given premise with length $n$ and hypothesis with length $m$ respectively, where $u_i, v_j \in \mathbb {R}^r$ are $r$-dimensional word embedding vectors. The goal is to predict a label $y$ that indicates the logical relationship between premise $u$ and hypothesis $v$. Below we briefly explain the aforementioned parts.

Input Encoding It utilizes a bidirectional LSTM (BiLSTM) for encoding the given premise and hypothesis using Equations 16 and 17 respectively.

$$\hat{u} = \textit{BiLSTM}(u)$$ (Eq. 16)

$$\hat{v} = \textit{BiLSTM}(v)$$ (Eq. 17)

where $\hat{u} \in \mathbb {R}^{n \times 2d}$ and $\hat{v} \in \mathbb {R}^{m \times 2d}$ are the reading sequences of $u$ and $v$ respectively.

Attention It employs a soft alignment method to associate the relevant sub-components between the given premise and hypothesis. Equation 19 (energy function) computes the unnormalized attention weights as the similarity of the hidden states of the premise and hypothesis.

$$e_{ij} = \hat{u}_i \cdot \hat{v}_j^{\mathrm {T}}$$ (Eq. 19)

where $\hat{u}_i$ and $\hat{v}_j$ are the hidden representations of $u_i$ and $v_j$ respectively, which are computed earlier in Equations 16 and 17. Next, for each word in either premise or hypothesis, the relevant semantics in the other sentence is extracted and composed according to $e_{ij}$. Equations 20 and 21 provide formal and specific details of this procedure.

$$\tilde{u}_i = \sum _{j=1}^{m} \frac{\exp (e_{ij})}{\sum _{k=1}^{m} \exp (e_{ik})} \hat{v}_j$$ (Eq. 20)

$$\tilde{v}_j = \sum _{i=1}^{n} \frac{\exp (e_{ij})}{\sum _{k=1}^{n} \exp (e_{kj})} \hat{u}_i$$ (Eq. 21)

where $\tilde{u}_i$ represents the extracted relevant information of $\hat{v}$ by attending to $\hat{u}_i$, while $\tilde{v}_j$ represents the extracted relevant information of $\hat{u}$ by attending to $\hat{v}_j$. Next, it passes the enriched information through a projector layer which produces the final output of the attention stage. Equations 22 and 23 formally represent this process.

$$p_i = \textit{ReLU}(W_p [\hat{u}_i; \tilde{u}_i; \hat{u}_i - \tilde{u}_i; \hat{u}_i \odot \tilde{u}_i] + b_p)$$ (Eq. 22)

$$q_j = \textit{ReLU}(W_p [\hat{v}_j; \tilde{v}_j; \hat{v}_j - \tilde{v}_j; \hat{v}_j \odot \tilde{v}_j] + b_p)$$ (Eq. 23)

Here $\odot$ stands for element-wise product, while $W_p$ and $b_p$ are the trainable weights and biases of the projector layer respectively. $p$ and $q$ indicate the output of the attention stage for the premise and hypothesis respectively.

Inference During this phase, it uses another BiLSTM to aggregate the two sequences of computed matching vectors, $p$ and $q$, from the attention stage (Equations 27 and 28).

$$\hat{p} = \textit{BiLSTM}(p)$$ (Eq. 27)

$$\hat{q} = \textit{BiLSTM}(q)$$ (Eq. 28)

where $\hat{p}$ and $\hat{q}$ are the reading sequences of $p$ and $q$ respectively. Finally, the concatenation of the max and average pooling of $\hat{p}$ and $\hat{q}$ is passed through a multilayer perceptron (MLP) classifier that includes a hidden layer with $\tanh$ activation and a $\emph{softmax}$ output layer. The model is trained in an end-to-end manner.

Attention Study

Here we provide more examples on the NLI task which are intended to examine specific behaviors of the model. Such examples show interesting observations that we can analyze in future work. Table 1 shows the list of all examples.

LSTM Gating Signal

Finally, Figure 11 depicts the backward LSTM gating signals study. | Unanswerable
f7070b2e258beac9b09514be2bfcc5a528cc3a0e | f7070b2e258beac9b09514be2bfcc5a528cc3a0e_1 | Q: What is the performance of their model?
Text: Introduction
Deep learning has achieved tremendous success for many NLP tasks. However, unlike traditional methods that provide optimized weights for human understandable features, the behavior of deep learning models is much harder to interpret. Due to the high dimensionality of word embeddings, and the complex, typically recurrent architectures used for textual data, it is often unclear how and why a deep learning model reaches its decisions.
There are a few attempts toward explaining/interpreting deep learning-based models, mostly by visualizing the representation of words and/or hidden states, and their importances (via saliency or erasure) on shallow tasks like sentiment analysis and POS tagging BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . In contrast, we focus on interpreting the gating and attention signals of the intermediate layers of deep models in the challenging task of Natural Language Inference. A key concept in explaining deep models is saliency, which determines what is critical for the final decision of a deep model. So far, saliency has only been used to illustrate the impact of word embeddings. In this paper, we extend this concept to the intermediate layer of deep models to examine the saliency of attention as well as the LSTM gating signals to understand the behavior of these components and their impact on the final decision.
We make two main contributions. First, we introduce new strategies for interpreting the behavior of deep models in their intermediate layers, specifically, by examining the saliency of the attention and the gating signals. Second, we provide an extensive analysis of the state-of-the-art model for the NLI task and show that our methods reveal interesting insights not available from traditional methods of inspecting attention and word saliency.
In this paper, our focus is on NLI, which is a fundamental NLP task that requires both understanding and reasoning. Furthermore, the state-of-the-art NLI models employ complex neural architectures involving key mechanisms, such as attention and repeated reading, widely seen in successful models for other NLP tasks. As such, we expect our methods to be potentially useful for other natural language understanding tasks as well.
Task and Model
In NLI BIBREF4 , we are given two sentences, a premise and a hypothesis, the goal is to decide the logical relationship (Entailment, Neutral, or Contradiction) between them.
Many of the top performing NLI models BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , are variants of the ESIM model BIBREF11 , which we choose to analyze in this paper. ESIM reads the sentences independently using LSTM at first, and then applies attention to align/contrast the sentences. Another round of LSTM reading then produces the final representations, which are compared to make the prediction. Detailed description of ESIM can be found in the Appendix.
Using the SNLI BIBREF4 data, we train two variants of ESIM, with dimensionality 50 and 300 respectively, referred to as ESIM-50 and ESIM-300 in the remainder of the paper.
Visualization of Attention and Gating
In this work, we are primarily interested in the internal workings of the NLI model. In particular, we focus on the attention and the gating signals of LSTM readers, and how they contribute to the decisions of the model.
Attention
Attention has been widely used in many NLP tasks BIBREF12 , BIBREF13 , BIBREF14 and is probably one of the most critical parts that affects the inference decisions. Several pieces of prior work in NLI have attempted to visualize the attention layer to provide some understanding of their models BIBREF5 , BIBREF15 . Such visualizations generate a heatmap representing the similarity between the hidden states of the premise and the hypothesis (Eq. 19 of Appendix). Unfortunately the similarities are often the same regardless of the decision.
Let us consider the following example, where the same premise “A kid is playing in the garden”, is paired with three different hypotheses:
A kid is taking a nap in the garden
A kid is having fun in the garden with her family
A kid is having fun in the garden
Note that the ground truth relationships are Contradiction, Neutral, and Entailment, respectively.
The first row of Fig. 1 shows the visualization of normalized attention for the three cases produced by ESIM-50, which makes correct predictions for all of them. As we can see from the figure, the three attention maps are fairly similar despite the completely different decisions. The key issue is that the attention visualization only allows us to see how the model aligns the premise with the hypothesis, but does not show how such alignment impacts the decision. This prompts us to consider the saliency of attention.
The concept of saliency was first introduced in vision for visualizing the spatial support on an image for a particular object class BIBREF16 . In NLP, saliency has been used to study the importance of words toward a final decision BIBREF0 .
We propose to examine the saliency of attention. Specifically, given a premise-hypothesis pair and the model's decision $y$ , we consider the similarity between a pair of premise and hypothesis hidden states $e_{ij}$ as a variable. The score of the decision $S(y)$ is thus a function of $e_{ij}$ for all $i$ and $j$ . The saliency of $e_{ij}$ is then defined to be $|\frac{\partial S(y)}{\partial {e_{ij}}}|$ .
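A minimal sketch of this computation is given below. The `classifier` head and the mean-pooled summary are crude stand-ins for the rest of the ESIM inference stack, not the model's actual architecture; the point is only how $|\frac{\partial S(y)}{\partial e_{ij}}|$ is obtained with automatic differentiation once the energies $e_{ij}$ are kept as differentiable variables.

```python
# Minimal sketch of attention saliency: gradient of the predicted-class score
# with respect to the unnormalized alignment scores e_ij.
import torch

def attention_saliency(prem_h, hyp_h, classifier):
    """prem_h: (n, d) premise hidden states; hyp_h: (m, d) hypothesis hidden
    states; classifier: any differentiable head mapping an aligned summary to
    class logits (a stand-in for the remainder of the model)."""
    e = prem_h @ hyp_h.T                              # (n, m) energies e_ij
    e.retain_grad()
    prem_aligned = torch.softmax(e, dim=1) @ hyp_h    # premise attends to hypothesis
    hyp_aligned = torch.softmax(e, dim=0).T @ prem_h  # hypothesis attends to premise
    summary = torch.cat([prem_aligned.mean(0), hyp_aligned.mean(0)])
    logits = classifier(summary)
    y = logits.argmax()                               # the model's decision
    logits[y].backward()                              # S(y)
    return e.grad.abs()                               # |dS(y)/de_ij|, shape (n, m)

# Toy usage with random states standing in for trained encoder outputs.
n, m, d = 7, 5, 50
saliency = attention_saliency(
    torch.randn(n, d, requires_grad=True),
    torch.randn(m, d, requires_grad=True),
    torch.nn.Linear(2 * d, 3),   # Entailment / Neutral / Contradiction logits
)
```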
The second row of Fig. 1 presents the attention saliency map for the three examples acquired by the same ESIM-50 model. Interestingly, the saliencies are clearly different across the examples, each highlighting different parts of the alignment. Specifically, for h1, we see that the alignment between “is playing” and “taking a nap” and the alignment of “in a garden” have the most prominent saliency toward the decision of Contradiction. For h2, the alignment of “kid” and “her family” seems to be the most salient for the decision of Neutral. Finally, for h3, the alignment between “is having fun” and “kid is playing” has the strongest impact toward the decision of Entailment.
From this example, we can see that by inspecting the attention saliency, we effectively pinpoint which part of the alignments contribute most critically to the final prediction whereas simply visualizing the attention itself reveals little information.
In the previous examples, we study the behavior of the same model on different inputs. Now we use the attention saliency to compare the two different ESIM models: ESIM-50 and ESIM-300.
Consider two examples with a shared hypothesis of “A man ordered a book” and premise:
John ordered a book from amazon
Mary ordered a book from amazon
Here ESIM-50 fails to capture the gender connections of the two different names and predicts Neutral for both inputs, whereas ESIM-300 correctly predicts Entailment for the first case and Contradiction for the second.
In the first two columns of Fig. 2 (column a and b) we visualize the attention of the two examples for ESIM-50 (left) and ESIM-300 (right) respectively. Although the two models make different predictions, their attention maps appear qualitatively similar.
In contrast, columns 3-4 of Fig. 2 (column c and d) present the attention saliency for the two examples by ESIM-50 and ESIM-300 respectively. We see that for both examples, ESIM-50 primarily focused on the alignment of “ordered”, whereas ESIM-300 focused more on the alignment of “John” and “Mary” with “man”. It is interesting to note that ESIM-300 does not appear to learn significantly different similarity values compared to ESIM-50 for the two critical pairs of words (“John”, “man”) and (“Mary”, “man”) based on the attention map. The saliency map, however, reveals that the two models use these values quite differently, with only ESIM-300 correctly focusing on them.
LSTM Gating Signals
LSTM gating signals determine the flow of information. In other words, they indicate how LSTM reads the word sequences and how the information from different parts is captured and combined. LSTM gating signals are rarely analyzed, possibly due to their high dimensionality and complexity. In this work, we consider both the gating signals and their saliency, which is computed as the partial derivative of the score of the final decision with respect to each gating signal.
Instead of considering individual dimensions of the gating signals, we aggregate them to consider their norm, both for the signal and for its saliency. Note that ESIM models have two LSTM layers, the first (input) LSTM performs the input encoding and the second (inference) LSTM generates the representation for inference.
In Fig. 3 we plot the normalized signal and saliency norms for different gates (input, forget, output) of the Forward input (bottom three rows) and inference (top three rows) LSTMs. These results are produced by the ESIM-50 model for the three examples of Section 3.1, one for each column.
From the figure, we first note that the saliency tends to be somewhat consistent across different gates within the same LSTM, suggesting that we can interpret them jointly to identify parts of the sentence important for the model's prediction.
Comparing across examples, we see that the saliency curves show pronounced differences across the examples. For instance, the saliency pattern of the Neutral example is significantly different from the other two examples, and heavily concentrated toward the end of the sentence (“with her family”). Note that without this part of the sentence, the relationship would have been Entailment. The focus (evidenced by its strong saliency and strong gating signal) on this particular part, which presents information not available from the premise, explains the model's decision of Neutral.
Comparing the behavior of the input LSTM and the inference LSTM, we observe interesting shifts of focus. In particular, we see that the inference LSTM tends to see much more concentrated saliency over key parts of the sentence, whereas the input LSTM sees more spread of saliency. For example, for the Contradiction example, the input LSTM sees high saliency for both “taking” and “in”, whereas the inference LSTM primarily focuses on “nap”, which is the key word suggesting a Contradiction. Note that ESIM uses attention between the input and inference LSTM layers to align/contrast the sentences, hence it makes sense that the inference LSTM is more focused on the critical differences between the sentences. This is also observed for the Neutral example as well.
It is worth noting that, while revealing similar general trends, the backward LSTM can sometimes focus on different parts of the sentence (e.g., see Fig. 11 of Appendix), suggesting the forward and backward readings provide complementary understanding of the sentence.
Conclusion
We propose new visualization and interpretation strategies for neural models to understand how and why they work. We demonstrate the effectiveness of the proposed strategies on a complex task (NLI). Our strategies are able to provide interesting insights not achievable by previous explanation techniques. Our future work will extend our study to consider other NLP tasks and models with the goal of producing useful insights for further improving these models.

Model

In this section we describe the ESIM model. We divide ESIM into three main parts: 1) input encoding, 2) attention, and 3) inference. Figure 4 demonstrates a high-level view of the ESIM framework. Let $u=[u_1, \cdots , u_n]$ and $v=[v_1, \cdots , v_m]$ be the given premise with length $n$ and hypothesis with length $m$ respectively, where $u_i, v_j \in \mathbb {R}^r$ are $r$-dimensional word embedding vectors. The goal is to predict a label $y$ that indicates the logical relationship between premise $u$ and hypothesis $v$. Below we briefly explain the aforementioned parts.

Input Encoding It utilizes a bidirectional LSTM (BiLSTM) for encoding the given premise and hypothesis using Equations 16 and 17 respectively.

$$\hat{u} = \textit{BiLSTM}(u)$$ (Eq. 16)

$$\hat{v} = \textit{BiLSTM}(v)$$ (Eq. 17)

where $\hat{u} \in \mathbb {R}^{n \times 2d}$ and $\hat{v} \in \mathbb {R}^{m \times 2d}$ are the reading sequences of $u$ and $v$ respectively.

Attention It employs a soft alignment method to associate the relevant sub-components between the given premise and hypothesis. Equation 19 (energy function) computes the unnormalized attention weights as the similarity of the hidden states of the premise and hypothesis.

$$e_{ij} = \hat{u}_i \cdot \hat{v}_j^{\mathrm {T}}$$ (Eq. 19)

where $\hat{u}_i$ and $\hat{v}_j$ are the hidden representations of $u_i$ and $v_j$ respectively, which are computed earlier in Equations 16 and 17. Next, for each word in either premise or hypothesis, the relevant semantics in the other sentence is extracted and composed according to $e_{ij}$. Equations 20 and 21 provide formal and specific details of this procedure.

$$\tilde{u}_i = \sum _{j=1}^{m} \frac{\exp (e_{ij})}{\sum _{k=1}^{m} \exp (e_{ik})} \hat{v}_j$$ (Eq. 20)

$$\tilde{v}_j = \sum _{i=1}^{n} \frac{\exp (e_{ij})}{\sum _{k=1}^{n} \exp (e_{kj})} \hat{u}_i$$ (Eq. 21)

where $\tilde{u}_i$ represents the extracted relevant information of $\hat{v}$ by attending to $\hat{u}_i$, while $\tilde{v}_j$ represents the extracted relevant information of $\hat{u}$ by attending to $\hat{v}_j$. Next, it passes the enriched information through a projector layer which produces the final output of the attention stage. Equations 22 and 23 formally represent this process.

$$p_i = \textit{ReLU}(W_p [\hat{u}_i; \tilde{u}_i; \hat{u}_i - \tilde{u}_i; \hat{u}_i \odot \tilde{u}_i] + b_p)$$ (Eq. 22)

$$q_j = \textit{ReLU}(W_p [\hat{v}_j; \tilde{v}_j; \hat{v}_j - \tilde{v}_j; \hat{v}_j \odot \tilde{v}_j] + b_p)$$ (Eq. 23)

Here $\odot$ stands for element-wise product, while $W_p$ and $b_p$ are the trainable weights and biases of the projector layer respectively. $p$ and $q$ indicate the output of the attention stage for the premise and hypothesis respectively.

Inference During this phase, it uses another BiLSTM to aggregate the two sequences of computed matching vectors, $p$ and $q$, from the attention stage (Equations 27 and 28).

$$\hat{p} = \textit{BiLSTM}(p)$$ (Eq. 27)

$$\hat{q} = \textit{BiLSTM}(q)$$ (Eq. 28)

where $\hat{p}$ and $\hat{q}$ are the reading sequences of $p$ and $q$ respectively. Finally, the concatenation of the max and average pooling of $\hat{p}$ and $\hat{q}$ is passed through a multilayer perceptron (MLP) classifier that includes a hidden layer with $\tanh$ activation and a $\emph{softmax}$ output layer. The model is trained in an end-to-end manner.

Attention Study

Here we provide more examples on the NLI task which are intended to examine specific behaviors of the model. Such examples show interesting observations that we can analyze in future work. Table 1 shows the list of all examples.

LSTM Gating Signal

Finally, Figure 11 depicts the backward LSTM gating signals study. | Unanswerable
2efdcebebeb970021233553104553205ce5d6567 | 2efdcebebeb970021233553104553205ce5d6567_0 | Q: How many layers are there in their model?
Text: Introduction
Deep learning has achieved tremendous success for many NLP tasks. However, unlike traditional methods that provide optimized weights for human understandable features, the behavior of deep learning models is much harder to interpret. Due to the high dimensionality of word embeddings, and the complex, typically recurrent architectures used for textual data, it is often unclear how and why a deep learning model reaches its decisions.
There are a few attempts toward explaining/interpreting deep learning-based models, mostly by visualizing the representation of words and/or hidden states, and their importances (via saliency or erasure) on shallow tasks like sentiment analysis and POS tagging BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . In contrast, we focus on interpreting the gating and attention signals of the intermediate layers of deep models in the challenging task of Natural Language Inference. A key concept in explaining deep models is saliency, which determines what is critical for the final decision of a deep model. So far, saliency has only been used to illustrate the impact of word embeddings. In this paper, we extend this concept to the intermediate layer of deep models to examine the saliency of attention as well as the LSTM gating signals to understand the behavior of these components and their impact on the final decision.
We make two main contributions. First, we introduce new strategies for interpreting the behavior of deep models in their intermediate layers, specifically, by examining the saliency of the attention and the gating signals. Second, we provide an extensive analysis of the state-of-the-art model for the NLI task and show that our methods reveal interesting insights not available from traditional methods of inspecting attention and word saliency.
In this paper, our focus is on NLI, which is a fundamental NLP task that requires both understanding and reasoning. Furthermore, the state-of-the-art NLI models employ complex neural architectures involving key mechanisms, such as attention and repeated reading, widely seen in successful models for other NLP tasks. As such, we expect our methods to be potentially useful for other natural language understanding tasks as well.
Task and Model
In NLI BIBREF4 , we are given two sentences, a premise and a hypothesis, the goal is to decide the logical relationship (Entailment, Neutral, or Contradiction) between them.
Many of the top performing NLI models BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , are variants of the ESIM model BIBREF11 , which we choose to analyze in this paper. ESIM reads the sentences independently using LSTM at first, and then applies attention to align/contrast the sentences. Another round of LSTM reading then produces the final representations, which are compared to make the prediction. Detailed description of ESIM can be found in the Appendix.
Using the SNLI BIBREF4 data, we train two variants of ESIM, with dimensionality 50 and 300 respectively, referred to as ESIM-50 and ESIM-300 in the remainder of the paper.
Visualization of Attention and Gating
In this work, we are primarily interested in the internal workings of the NLI model. In particular, we focus on the attention and the gating signals of LSTM readers, and how they contribute to the decisions of the model.
Attention
Attention has been widely used in many NLP tasks BIBREF12 , BIBREF13 , BIBREF14 and is probably one of the most critical parts that affects the inference decisions. Several pieces of prior work in NLI have attempted to visualize the attention layer to provide some understanding of their models BIBREF5 , BIBREF15 . Such visualizations generate a heatmap representing the similarity between the hidden states of the premise and the hypothesis (Eq. 19 of Appendix). Unfortunately the similarities are often the same regardless of the decision.
Let us consider the following example, where the same premise “A kid is playing in the garden”, is paired with three different hypotheses:
A kid is taking a nap in the garden
A kid is having fun in the garden with her family
A kid is having fun in the garden
Note that the ground truth relationships are Contradiction, Neutral, and Entailment, respectively.
The first row of Fig. 1 shows the visualization of normalized attention for the three cases produced by ESIM-50, which makes correct predictions for all of them. As we can see from the figure, the three attention maps are fairly similar despite the completely different decisions. The key issue is that the attention visualization only allows us to see how the model aligns the premise with the hypothesis, but does not show how such alignment impacts the decision. This prompts us to consider the saliency of attention.
The concept of saliency was first introduced in vision for visualizing the spatial support on an image for a particular object class BIBREF16 . In NLP, saliency has been used to study the importance of words toward a final decision BIBREF0 .
We propose to examine the saliency of attention. Specifically, given a premise-hypothesis pair and the model's decision $y$ , we consider the similarity between a pair of premise and hypothesis hidden states $e_{ij}$ as a variable. The score of the decision $S(y)$ is thus a function of $e_{ij}$ for all $i$ and $j$ . The saliency of $e_{ij}$ is then defined to be $|\frac{\partial S(y)}{\partial {e_{ij}}}|$ .
The second row of Fig. 1 presents the attention saliency map for the three examples acquired by the same ESIM-50 model. Interestingly, the saliencies are clearly different across the examples, each highlighting different parts of the alignment. Specifically, for h1, we see that the alignment between “is playing” and “taking a nap” and the alignment of “in a garden” have the most prominent saliency toward the decision of Contradiction. For h2, the alignment of “kid” and “her family” seems to be the most salient for the decision of Neutral. Finally, for h3, the alignment between “is having fun” and “kid is playing” has the strongest impact toward the decision of Entailment.
From this example, we can see that by inspecting the attention saliency, we effectively pinpoint which part of the alignments contribute most critically to the final prediction whereas simply visualizing the attention itself reveals little information.
In the previous examples, we study the behavior of the same model on different inputs. Now we use the attention saliency to compare the two different ESIM models: ESIM-50 and ESIM-300.
Consider two examples with a shared hypothesis of “A man ordered a book” and premise:
John ordered a book from amazon
Mary ordered a book from amazon
Here ESIM-50 fails to capture the gender connections of the two different names and predicts Neutral for both inputs, whereas ESIM-300 correctly predicts Entailment for the first case and Contradiction for the second.
In the first two columns of Fig. 2 (column a and b) we visualize the attention of the two examples for ESIM-50 (left) and ESIM-300 (right) respectively. Although the two models make different predictions, their attention maps appear qualitatively similar.
In contrast, columns 3-4 of Fig. 2 (column c and d) present the attention saliency for the two examples by ESIM-50 and ESIM-300 respectively. We see that for both examples, ESIM-50 primarily focused on the alignment of “ordered”, whereas ESIM-300 focused more on the alignment of “John” and “Mary” with “man”. It is interesting to note that ESIM-300 does not appear to learn significantly different similarity values compared to ESIM-50 for the two critical pairs of words (“John”, “man”) and (“Mary”, “man”) based on the attention map. The saliency map, however, reveals that the two models use these values quite differently, with only ESIM-300 correctly focusing on them.
LSTM Gating Signals
LSTM gating signals determine the flow of information. In other words, they indicate how LSTM reads the word sequences and how the information from different parts is captured and combined. LSTM gating signals are rarely analyzed, possibly due to their high dimensionality and complexity. In this work, we consider both the gating signals and their saliency, which is computed as the partial derivative of the score of the final decision with respect to each gating signal.
Instead of considering individual dimensions of the gating signals, we aggregate them to consider their norm, both for the signal and for its saliency. Note that ESIM models have two LSTM layers, the first (input) LSTM performs the input encoding and the second (inference) LSTM generates the representation for inference.
In Fig. 3 we plot the normalized signal and saliency norms for different gates (input, forget, output) of the Forward input (bottom three rows) and inference (top three rows) LSTMs. These results are produced by the ESIM-50 model for the three examples of Section 3.1, one for each column.
From the figure, we first note that the saliency tends to be somewhat consistent across different gates within the same LSTM, suggesting that we can interpret them jointly to identify parts of the sentence important for the model's prediction.
Comparing across examples, we see that the saliency curves show pronounced differences across the examples. For instance, the saliency pattern of the Neutral example is significantly different from the other two examples, and heavily concentrated toward the end of the sentence (“with her family”). Note that without this part of the sentence, the relationship would have been Entailment. The focus (evidenced by its strong saliency and strong gating signal) on this particular part, which presents information not available from the premise, explains the model's decision of Neutral.
Comparing the behavior of the input LSTM and the inference LSTM, we observe interesting shifts of focus. In particular, we see that the inference LSTM tends to see much more concentrated saliency over key parts of the sentence, whereas the input LSTM sees more spread of saliency. For example, for the Contradiction example, the input LSTM sees high saliency for both “taking” and “in”, whereas the inference LSTM primarily focuses on “nap”, which is the key word suggesting a Contradiction. Note that ESIM uses attention between the input and inference LSTM layers to align/contrast the sentences, hence it makes sense that the inference LSTM is more focused on the critical differences between the sentences. This is also observed for the Neutral example as well.
It is worth noting that, while revealing similar general trends, the backward LSTM can sometimes focus on different parts of the sentence (e.g., see Fig. 11 of Appendix), suggesting the forward and backward readings provide complementary understanding of the sentence.
Conclusion
We propose new visualization and interpretation strategies for neural models to understand how and why they work. We demonstrate the effectiveness of the proposed strategies on a complex task (NLI). Our strategies are able to provide interesting insights not achievable by previous explanation techniques. Our future work will extend our study to consider other NLP tasks and models with the goal of producing useful insights for further improving these models.

Model

In this section we describe the ESIM model. We divide ESIM into three main parts: 1) input encoding, 2) attention, and 3) inference. Figure 4 demonstrates a high-level view of the ESIM framework. Let $u=[u_1, \cdots , u_n]$ and $v=[v_1, \cdots , v_m]$ be the given premise with length $n$ and hypothesis with length $m$ respectively, where $u_i, v_j \in \mathbb {R}^r$ are $r$-dimensional word embedding vectors. The goal is to predict a label $y$ that indicates the logical relationship between premise $u$ and hypothesis $v$. Below we briefly explain the aforementioned parts.

Input Encoding It utilizes a bidirectional LSTM (BiLSTM) for encoding the given premise and hypothesis using Equations 16 and 17 respectively.

$$\hat{u} = \textit{BiLSTM}(u)$$ (Eq. 16)

$$\hat{v} = \textit{BiLSTM}(v)$$ (Eq. 17)

where $\hat{u} \in \mathbb {R}^{n \times 2d}$ and $\hat{v} \in \mathbb {R}^{m \times 2d}$ are the reading sequences of $u$ and $v$ respectively.

Attention It employs a soft alignment method to associate the relevant sub-components between the given premise and hypothesis. Equation 19 (energy function) computes the unnormalized attention weights as the similarity of the hidden states of the premise and hypothesis.

$$e_{ij} = \hat{u}_i \cdot \hat{v}_j^{\mathrm {T}}$$ (Eq. 19)

where $\hat{u}_i$ and $\hat{v}_j$ are the hidden representations of $u_i$ and $v_j$ respectively, which are computed earlier in Equations 16 and 17. Next, for each word in either premise or hypothesis, the relevant semantics in the other sentence is extracted and composed according to $e_{ij}$. Equations 20 and 21 provide formal and specific details of this procedure.

$$\tilde{u}_i = \sum _{j=1}^{m} \frac{\exp (e_{ij})}{\sum _{k=1}^{m} \exp (e_{ik})} \hat{v}_j$$ (Eq. 20)

$$\tilde{v}_j = \sum _{i=1}^{n} \frac{\exp (e_{ij})}{\sum _{k=1}^{n} \exp (e_{kj})} \hat{u}_i$$ (Eq. 21)

where $\tilde{u}_i$ represents the extracted relevant information of $\hat{v}$ by attending to $\hat{u}_i$, while $\tilde{v}_j$ represents the extracted relevant information of $\hat{u}$ by attending to $\hat{v}_j$. Next, it passes the enriched information through a projector layer which produces the final output of the attention stage. Equations 22 and 23 formally represent this process.

$$p_i = \textit{ReLU}(W_p [\hat{u}_i; \tilde{u}_i; \hat{u}_i - \tilde{u}_i; \hat{u}_i \odot \tilde{u}_i] + b_p)$$ (Eq. 22)

$$q_j = \textit{ReLU}(W_p [\hat{v}_j; \tilde{v}_j; \hat{v}_j - \tilde{v}_j; \hat{v}_j \odot \tilde{v}_j] + b_p)$$ (Eq. 23)

Here $\odot$ stands for element-wise product, while $W_p$ and $b_p$ are the trainable weights and biases of the projector layer respectively. $p$ and $q$ indicate the output of the attention stage for the premise and hypothesis respectively.

Inference During this phase, it uses another BiLSTM to aggregate the two sequences of computed matching vectors, $p$ and $q$, from the attention stage (Equations 27 and 28).

$$\hat{p} = \textit{BiLSTM}(p)$$ (Eq. 27)

$$\hat{q} = \textit{BiLSTM}(q)$$ (Eq. 28)

where $\hat{p}$ and $\hat{q}$ are the reading sequences of $p$ and $q$ respectively. Finally, the concatenation of the max and average pooling of $\hat{p}$ and $\hat{q}$ is passed through a multilayer perceptron (MLP) classifier that includes a hidden layer with $\tanh$ activation and a $\emph{softmax}$ output layer. The model is trained in an end-to-end manner.

Attention Study

Here we provide more examples on the NLI task which are intended to examine specific behaviors of the model. Such examples show interesting observations that we can analyze in future work. Table 1 shows the list of all examples.

LSTM Gating Signal

Finally, Figure 11 depicts the backward LSTM gating signals study. | two LSTM layers
4fa851d91388f0803e33f6cfae519548598cd37c | 4fa851d91388f0803e33f6cfae519548598cd37c_0 | Q: Did they compare with gradient-based methods?
Text: Introduction
Deep learning has achieved tremendous success for many NLP tasks. However, unlike traditional methods that provide optimized weights for human understandable features, the behavior of deep learning models is much harder to interpret. Due to the high dimensionality of word embeddings, and the complex, typically recurrent architectures used for textual data, it is often unclear how and why a deep learning model reaches its decisions.
There are a few attempts toward explaining/interpreting deep learning-based models, mostly by visualizing the representation of words and/or hidden states, and their importances (via saliency or erasure) on shallow tasks like sentiment analysis and POS tagging BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . In contrast, we focus on interpreting the gating and attention signals of the intermediate layers of deep models in the challenging task of Natural Language Inference. A key concept in explaining deep models is saliency, which determines what is critical for the final decision of a deep model. So far, saliency has only been used to illustrate the impact of word embeddings. In this paper, we extend this concept to the intermediate layer of deep models to examine the saliency of attention as well as the LSTM gating signals to understand the behavior of these components and their impact on the final decision.
We make two main contributions. First, we introduce new strategies for interpreting the behavior of deep models in their intermediate layers, specifically, by examining the saliency of the attention and the gating signals. Second, we provide an extensive analysis of the state-of-the-art model for the NLI task and show that our methods reveal interesting insights not available from traditional methods of inspecting attention and word saliency.
In this paper, our focus is on NLI, which is a fundamental NLP task that requires both understanding and reasoning. Furthermore, the state-of-the-art NLI models employ complex neural architectures involving key mechanisms, such as attention and repeated reading, widely seen in successful models for other NLP tasks. As such, we expect our methods to be potentially useful for other natural language understanding tasks as well.
Task and Model
In NLI BIBREF4 , we are given two sentences, a premise and a hypothesis, the goal is to decide the logical relationship (Entailment, Neutral, or Contradiction) between them.
Many of the top performing NLI models BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , are variants of the ESIM model BIBREF11 , which we choose to analyze in this paper. ESIM reads the sentences independently using LSTM at first, and then applies attention to align/contrast the sentences. Another round of LSTM reading then produces the final representations, which are compared to make the prediction. Detailed description of ESIM can be found in the Appendix.
Using the SNLI BIBREF4 data, we train two variants of ESIM, with dimensionality 50 and 300 respectively, referred to as ESIM-50 and ESIM-300 in the remainder of the paper.
Visualization of Attention and Gating
In this work, we are primarily interested in the internal workings of the NLI model. In particular, we focus on the attention and the gating signals of LSTM readers, and how they contribute to the decisions of the model.
Attention
Attention has been widely used in many NLP tasks BIBREF12 , BIBREF13 , BIBREF14 and is probably one of the most critical parts that affects the inference decisions. Several pieces of prior work in NLI have attempted to visualize the attention layer to provide some understanding of their models BIBREF5 , BIBREF15 . Such visualizations generate a heatmap representing the similarity between the hidden states of the premise and the hypothesis (Eq. 19 of Appendix). Unfortunately the similarities are often the same regardless of the decision.
Let us consider the following example, where the same premise “A kid is playing in the garden”, is paired with three different hypotheses:
A kid is taking a nap in the garden
A kid is having fun in the garden with her family
A kid is having fun in the garden
Note that the ground truth relationships are Contradiction, Neutral, and Entailment, respectively.
The first row of Fig. 1 shows the visualization of normalized attention for the three cases produced by ESIM-50, which makes correct predictions for all of them. As we can see from the figure, the three attention maps are fairly similar despite the completely different decisions. The key issue is that the attention visualization only allows us to see how the model aligns the premise with the hypothesis, but does not show how such alignment impacts the decision. This prompts us to consider the saliency of attention.
The concept of saliency was first introduced in vision for visualizing the spatial support on an image for a particular object class BIBREF16 . In NLP, saliency has been used to study the importance of words toward a final decision BIBREF0 .
We propose to examine the saliency of attention. Specifically, given a premise-hypothesis pair and the model's decision $y$ , we consider the similarity between a pair of premise and hypothesis hidden states $e_{ij}$ as a variable. The score of the decision $S(y)$ is thus a function of $e_{ij}$ for all $i$ and $j$ . The saliency of $e_{ij}$ is then defined to be $|\frac{\partial S(y)}{\partial {e_{ij}}}|$ .
The second row of Fig. 1 presents the attention saliency map for the three examples acquired by the same ESIM-50 model. Interestingly, the saliencies are clearly different across the examples, each highlighting different parts of the alignment. Specifically, for h1, we see that the alignment between “is playing” and “taking a nap” and the alignment of “in a garden” have the most prominent saliency toward the decision of Contradiction. For h2, the alignment of “kid” and “her family” seems to be the most salient for the decision of Neutral. Finally, for h3, the alignment between “is having fun” and “kid is playing” has the strongest impact toward the decision of Entailment.
From this example, we can see that by inspecting the attention saliency, we effectively pinpoint which part of the alignments contribute most critically to the final prediction whereas simply visualizing the attention itself reveals little information.
In the previous examples, we study the behavior of the same model on different inputs. Now we use the attention saliency to compare the two different ESIM models: ESIM-50 and ESIM-300.
Consider two examples with a shared hypothesis of “A man ordered a book” and premise:
John ordered a book from amazon
Mary ordered a book from amazon
Here ESIM-50 fails to capture the gender connections of the two different names and predicts Neutral for both inputs, whereas ESIM-300 correctly predicts Entailment for the first case and Contradiction for the second.
In the first two columns of Fig. 2 (column a and b) we visualize the attention of the two examples for ESIM-50 (left) and ESIM-300 (right) respectively. Although the two models make different predictions, their attention maps appear qualitatively similar.
In contrast, columns 3-4 of Fig. 2 (column c and d) present the attention saliency for the two examples by ESIM-50 and ESIM-300 respectively. We see that for both examples, ESIM-50 primarily focused on the alignment of “ordered”, whereas ESIM-300 focused more on the alignment of “John” and “Mary” with “man”. It is interesting to note that ESIM-300 does not appear to learn significantly different similarity values compared to ESIM-50 for the two critical pairs of words (“John”, “man”) and (“Mary”, “man”) based on the attention map. The saliency map, however, reveals that the two models use these values quite differently, with only ESIM-300 correctly focusing on them.
LSTM Gating Signals
LSTM gating signals determine the flow of information. In other words, they indicate how LSTM reads the word sequences and how the information from different parts is captured and combined. LSTM gating signals are rarely analyzed, possibly due to their high dimensionality and complexity. In this work, we consider both the gating signals and their saliency, which is computed as the partial derivative of the score of the final decision with respect to each gating signal.
Instead of considering individual dimensions of the gating signals, we aggregate them to consider their norm, both for the signal and for its saliency. Note that ESIM models have two LSTM layers, the first (input) LSTM performs the input encoding and the second (inference) LSTM generates the representation for inference.
In Fig. 3 we plot the normalized signal and saliency norms for different gates (input, forget, output) of the Forward input (bottom three rows) and inference (top three rows) LSTMs. These results are produced by the ESIM-50 model for the three examples of Section 3.1, one for each column.
From the figure, we first note that the saliency tends to be somewhat consistent across different gates within the same LSTM, suggesting that we can interpret them jointly to identify parts of the sentence important for the model's prediction.
Comparing across examples, we see that the saliency curves show pronounced differences across the examples. For instance, the saliency pattern of the Neutral example is significantly different from the other two examples, and heavily concentrated toward the end of the sentence (“with her family”). Note that without this part of the sentence, the relationship would have been Entailment. The focus (evidenced by its strong saliency and strong gating signal) on this particular part, which presents information not available from the premise, explains the model's decision of Neutral.
Comparing the behavior of the input LSTM and the inference LSTM, we observe interesting shifts of focus. In particular, we see that the inference LSTM tends to see much more concentrated saliency over key parts of the sentence, whereas the input LSTM sees more spread of saliency. For example, for the Contradiction example, the input LSTM sees high saliency for both “taking” and “in”, whereas the inference LSTM primarily focuses on “nap”, which is the key word suggesting a Contradiction. Note that ESIM uses attention between the input and inference LSTM layers to align/contrast the sentences, hence it makes sense that the inference LSTM is more focused on the critical differences between the sentences. This is also observed for the Neutral example as well.
It is worth noting that, while revealing similar general trends, the backward LSTM can sometimes focus on different parts of the sentence (e.g., see Fig. 11 of Appendix), suggesting the forward and backward readings provide complementary understanding of the sentence.
Conclusion
We propose new visualization and interpretation strategies for neural models to understand how and why they work. We demonstrate the effectiveness of the proposed strategies on a complex task (NLI). Our strategies are able to provide interesting insights not achievable by previous explanation techniques. Our future work will extend our study to consider other NLP tasks and models with the goal of producing useful insights for further improving these models.

Model

In this section we describe the ESIM model. We divide ESIM into three main parts: 1) input encoding, 2) attention, and 3) inference. Figure 4 demonstrates a high-level view of the ESIM framework. Let $u=[u_1, \cdots , u_n]$ and $v=[v_1, \cdots , v_m]$ be the given premise with length $n$ and hypothesis with length $m$ respectively, where $u_i, v_j \in \mathbb {R}^r$ are $r$-dimensional word embedding vectors. The goal is to predict a label $y$ that indicates the logical relationship between premise $u$ and hypothesis $v$. Below we briefly explain the aforementioned parts.

Input Encoding It utilizes a bidirectional LSTM (BiLSTM) for encoding the given premise and hypothesis using Equations 16 and 17 respectively.

$$\hat{u} = \textit{BiLSTM}(u)$$ (Eq. 16)

$$\hat{v} = \textit{BiLSTM}(v)$$ (Eq. 17)

where $\hat{u} \in \mathbb {R}^{n \times 2d}$ and $\hat{v} \in \mathbb {R}^{m \times 2d}$ are the reading sequences of $u$ and $v$ respectively.

Attention It employs a soft alignment method to associate the relevant sub-components between the given premise and hypothesis. Equation 19 (energy function) computes the unnormalized attention weights as the similarity of the hidden states of the premise and hypothesis.

$$e_{ij} = \hat{u}_i \cdot \hat{v}_j^{\mathrm {T}}$$ (Eq. 19)

where $\hat{u}_i$ and $\hat{v}_j$ are the hidden representations of $u_i$ and $v_j$ respectively, which are computed earlier in Equations 16 and 17. Next, for each word in either premise or hypothesis, the relevant semantics in the other sentence is extracted and composed according to $e_{ij}$. Equations 20 and 21 provide formal and specific details of this procedure.

$$\tilde{u}_i = \sum _{j=1}^{m} \frac{\exp (e_{ij})}{\sum _{k=1}^{m} \exp (e_{ik})} \hat{v}_j$$ (Eq. 20)

$$\tilde{v}_j = \sum _{i=1}^{n} \frac{\exp (e_{ij})}{\sum _{k=1}^{n} \exp (e_{kj})} \hat{u}_i$$ (Eq. 21)

where $\tilde{u}_i$ represents the extracted relevant information of $\hat{v}$ by attending to $\hat{u}_i$, while $\tilde{v}_j$ represents the extracted relevant information of $\hat{u}$ by attending to $\hat{v}_j$. Next, it passes the enriched information through a projector layer which produces the final output of the attention stage. Equations 22 and 23 formally represent this process.

$$p_i = \textit{ReLU}(W_p [\hat{u}_i; \tilde{u}_i; \hat{u}_i - \tilde{u}_i; \hat{u}_i \odot \tilde{u}_i] + b_p)$$ (Eq. 22)

$$q_j = \textit{ReLU}(W_p [\hat{v}_j; \tilde{v}_j; \hat{v}_j - \tilde{v}_j; \hat{v}_j \odot \tilde{v}_j] + b_p)$$ (Eq. 23)

Here $\odot$ stands for element-wise product, while $W_p$ and $b_p$ are the trainable weights and biases of the projector layer respectively. $p$ and $q$ indicate the output of the attention stage for the premise and hypothesis respectively.

Inference During this phase, it uses another BiLSTM to aggregate the two sequences of computed matching vectors, $p$ and $q$, from the attention stage (Equations 27 and 28).

$$\hat{p} = \textit{BiLSTM}(p)$$ (Eq. 27)

$$\hat{q} = \textit{BiLSTM}(q)$$ (Eq. 28)

where $\hat{p}$ and $\hat{q}$ are the reading sequences of $p$ and $q$ respectively. Finally, the concatenation of the max and average pooling of $\hat{p}$ and $\hat{q}$ is passed through a multilayer perceptron (MLP) classifier that includes a hidden layer with $\tanh$ activation and a $\emph{softmax}$ output layer. The model is trained in an end-to-end manner.

Attention Study

Here we provide more examples on the NLI task which are intended to examine specific behaviors of the model. Such examples show interesting observations that we can analyze in future work. Table 1 shows the list of all examples.

LSTM Gating Signal

Finally, Figure 11 depicts the backward LSTM gating signals study. | Unanswerable
a891039441e008f1fd0a227dbed003f76c140737 | a891039441e008f1fd0a227dbed003f76c140737_0 | Q: What MC abbreviate for?
Text: Introduction
Enabling computers to understand given documents and answer questions about their content has recently attracted intensive interest, including but not limited to the efforts as in BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Many specific problems such as machine comprehension and question answering often involve modeling such question-document pairs.
The recent availability of relatively large training datasets (see Section "Related Work" for more details) has made it more feasible to train and estimate rather complex models in an end-to-end fashion for these problems, in which a whole model is fit directly with given question-answer tuples and the resulting model has been shown to be rather effective.
In this paper, we take a closer look at modeling questions in such an end-to-end neural network framework, since we regard question understanding is of importance for such problems. We first introduced syntactic information to help encode questions. We then viewed and modelled different types of questions and the information shared among them as an adaptation problem and proposed adaptation models for them. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches can help attain better results on our competitive baselines.
Related Work
Recent advances in reading comprehension and question answering have been closely associated with the availability of various datasets. BIBREF0 released the MCTest data consisting of 500 short, fictional open-domain stories and 2000 questions. The CNN/Daily Mail dataset BIBREF1 contains news articles for cloze-style machine comprehension, in which only entities are removed and tested for comprehension. Children's Book Test (CBT) BIBREF2 leverages named entities, common nouns, verbs, and prepositions to test reading comprehension. The Stanford Question Answering Dataset (SQuAD) BIBREF3 is a more recently released dataset, which consists of more than 100,000 questions for documents taken from Wikipedia across a wide range of topics. The question-answer pairs are annotated through crowdsourcing. Answers are spans of text marked in the original documents. In this paper, we use SQuAD to evaluate our models.
Many neural network models have been studied on the SQuAD task. BIBREF6 proposed match LSTM to associate documents and questions and adapted the so-called pointer Network BIBREF7 to determine the positions of the answer text spans. BIBREF8 proposed a dynamic chunk reader to extract and rank a set of answer candidates. BIBREF9 focused on word representation and presented a fine-grained gating mechanism to dynamically combine word-level and character-level representations based on the properties of words. BIBREF10 proposed a multi-perspective context matching (MPCM) model, which matched an encoded document and question from multiple perspectives. BIBREF11 proposed a dynamic decoder and so-called highway maxout network to improve the effectiveness of the decoder. The bi-directional attention flow (BIDAF) BIBREF12 used the bi-directional attention to obtain a question-aware context representation.
In this paper, we introduce syntactic information to encode questions with a specific form of recursive neural networks BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . More specifically, we explore a tree-structured LSTM BIBREF13 , BIBREF14 which extends the linear-chain long short-term memory (LSTM) BIBREF17 to a recursive structure, which has the potential to capture long-distance interactions over the structures.
Different types of questions are often used to seek different types of information. For example, a "what" question could have very different properties from those of a "why" question, while they may share information and need to be trained together instead of separately. We view this as an "adaptation" problem: different types of questions share a basic model but are still discriminated when needed. Specifically, we are motivated by the idea of "i-vectors" BIBREF18 in speech recognition, where neural network based adaptation is performed among different (groups of) speakers; here we focus instead on different types of questions.
The Baseline Model
Our baseline model is composed of the following typical components: word embedding, input encoder, alignment, aggregation, and prediction. Below we discuss these components in more details.
We concatenate embedding at two levels to represent a word: the character composition and word-level embedding. The character composition feeds all characters of a word into a convolutional neural network (CNN) BIBREF19 to obtain a representation for the word. And we use the pre-trained 300-D GloVe vectors BIBREF20 (see the experiment section for details) to initialize our word-level embedding. Each word is therefore represented as the concatenation of the character-composition vector and word-level embedding. This is performed on both questions and documents, resulting in two matrices: the $\mathbf {Q}^e \in \mathbb {R} ^{N\times d_w}$ for a question and the $\mathbf {D}^e \in \mathbb {R} ^{M\times d_w}$ for a document, where $N$ is the question length (number of word tokens), $M$ is the document length, and $d_w$ is the embedding dimensionality.
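The two-level word representation could be sketched as follows. The kernel size, channel count, and vocabulary handling are illustrative assumptions rather than the paper's exact hyper-parameters, and a random matrix stands in for the pretrained GloVe vectors.

```python
# Minimal sketch: character-CNN composition concatenated with a pretrained
# word embedding, giving the d_w-dimensional word representation.
import torch
import torch.nn as nn

class WordRepresentation(nn.Module):
    def __init__(self, n_chars, char_dim, char_channels, word_vectors):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.char_cnn = nn.Conv1d(char_dim, char_channels, kernel_size=3, padding=1)
        # word_vectors: (vocab, 300) pretrained GloVe matrix
        self.word_emb = nn.Embedding.from_pretrained(word_vectors, freeze=False)

    def forward(self, word_ids, char_ids):
        # word_ids: (seq_len,); char_ids: (seq_len, max_word_len)
        ch = self.char_emb(char_ids).transpose(1, 2)          # (seq, char_dim, L)
        ch = torch.relu(self.char_cnn(ch)).max(dim=2).values  # (seq, channels)
        wd = self.word_emb(word_ids)                          # (seq, 300)
        return torch.cat([wd, ch], dim=-1)                    # (seq, d_w)

# Toy usage with a random matrix standing in for GloVe.
glove = torch.randn(1000, 300)
rep = WordRepresentation(n_chars=80, char_dim=16, char_channels=100, word_vectors=glove)
out = rep(torch.randint(0, 1000, (12,)), torch.randint(0, 80, (12, 10)))
```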
The above word representation focuses on representing individual words, and an input encoder here employs recurrent neural networks to obtain the representation of a word under its context. We use bi-directional GRU (BiGRU) BIBREF21 for both documents and questions.
$${\mathbf {Q}^c_i}&=\text{BiGRU}(\mathbf {Q}^e_i,i),\forall i \in [1, \dots , N] \\ {\mathbf {D}^c_j}&=\text{BiGRU}(\mathbf {D}^e_j,j),\forall j \in [1, \dots , M]$$ (Eq. 5)
A BiGRU runs a forward and backward GRU on a sequence starting from the left and the right end, respectively. By concatenating the hidden states of these two GRUs for each word, we obtain a representation for a question or document: $\mathbf {Q}^c \in \mathbb {R} ^{N\times d_c}$ for a question and $\mathbf {D}^c \in \mathbb {R} ^{M\times d_c}$ for a document.
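A minimal sketch of this contextual encoding is shown below; the dimensionalities are illustrative, and sharing one BiGRU between the question and the document is an assumption made here for brevity.

```python
# Minimal sketch: BiGRU contextual encoding of the embedded question and
# document, concatenating forward and backward hidden states per token.
import torch
import torch.nn as nn

d_w, d_c = 400, 128                      # embedding dim, contextual dim (2 directions)
encoder = nn.GRU(d_w, d_c // 2, batch_first=True, bidirectional=True)

q_emb = torch.randn(1, 20, d_w)          # (batch, N, d_w) embedded question
d_emb = torch.randn(1, 300, d_w)         # (batch, M, d_w) embedded document

q_ctx, _ = encoder(q_emb)                # (1, N, d_c): Q^c
d_ctx, _ = encoder(d_emb)                # (1, M, d_c): D^c
```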
Questions and documents interact closely. As in most previous work, our framework use both soft attention over questions and that over documents to capture the interaction between them. More specifically, in this soft-alignment layer, we first feed the contextual representation matrix $\mathbf {Q}^c$ and $\mathbf {D}^c$ to obtain alignment matrix $\mathbf {U} \in \mathbb {R} ^{N\times M}$ :
$$\mathbf {U}_{ij} =\mathbf {Q}_i^c \cdot \mathbf {D}_j^{c\mathrm {T}}, \forall i \in [1, \dots , N], \forall j \in [1, \dots , M]$$ (Eq. 7)
Each $\mathbf {U}_{ij}$ represents the similarity between a question word $\mathbf {Q}_i^c$ and a document word $\mathbf {D}_j^c$ .
Word-level Q-code As in BIBREF12 , we obtain a word-level Q-code. Specifically, for each document word $w_j$ , we find which words in the question are relevant to it. To this end, $\mathbf {a}_j\in \mathbb {R} ^{N}$ is computed with the following equation and used as a soft attention weight:
$$\mathbf {a}_j = softmax(\mathbf {U}_{:j}), \forall j \in [1, \dots , M]$$ (Eq. 8)
With the attention weights computed, we obtain the encoding of the question for each document word $w_j$ as follows, which we call word-level Q-code in this paper:
$$\mathbf {Q}^w=\mathbf {a}^{\mathrm {T}} \cdot \mathbf {Q}^{c} \in \mathbb {R} ^{M\times d_c}$$ (Eq. 9)
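Equations 7-9 amount to the following few tensor operations (dimensions are illustrative):

```python
# Minimal sketch: soft-alignment matrix U and the word-level Q-code.
import torch

N, M, d_c = 20, 300, 128
Q_c = torch.randn(N, d_c)                # contextual question representation Q^c
D_c = torch.randn(M, d_c)                # contextual document representation D^c

U = Q_c @ D_c.T                          # (N, M), U_ij = Q_i^c . D_j^c   (Eq. 7)
a = torch.softmax(U, dim=0)              # a_j = softmax over question words (Eq. 8)
Q_w = a.T @ Q_c                          # (M, d_c) word-level Q-code      (Eq. 9)
```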
Question-based filtering To better explore question understanding, we design this question-based filtering layer. As detailed later, different question representations can be easily incorporated into this layer, which also serves as a filter to find key information in the document based on the question. This layer is expandable with more complicated question modeling.
In the basic form of question-based filtering, for each question word $w_i$ , we find which words in the document are associated. Similar to $\mathbf {a}_j$ discussed above, we can obtain the attention weights on document words for each question word $w_i$ :
$$\mathbf {b}_i=softmax(\mathbf {U}_{i:})\in \mathbb {R} ^{M}, \forall i \in [1, \dots , N]$$ (Eq. 10)
By pooling $\mathbf {b}\in \mathbb {R} ^{N\times M}$ , we can obtain a question-based filtering weight $\mathbf {b}^f$ :
$$\mathbf {b}^f=norm(pooling(\mathbf {b})) \in \mathbb {R} ^{M}$$ (Eq. 11)
$$norm(\mathbf {x})=\frac{\mathbf {x}}{\sum _i x_i}$$ (Eq. 12)
where the specific pooling functions we use are max-pooling and mean-pooling. The document $\mathbf {D}^f$ , softly filtered based on the corresponding question, can then be calculated by:
$$\mathbf {D}_j^{f_{max}}=b^{f_{max}}_j \mathbf {D}_j^{c}, \forall j \in [1, \dots , M]$$ (Eq. 13)
$$\mathbf {D}_j^{f_{mean}}=b^{f_{mean}}_j \mathbf {D}_j^{c}, \forall j \in [1, \dots , M]$$ (Eq. 14)
Through concatenating the document representation $\mathbf {D}^c$ , word-level Q-code $\mathbf {Q}^w$ and question-filtered document $\mathbf {D}^f$ , we can finally obtain the alignment layer representation:
$$\mathbf {I}=[\mathbf {D}^c, \mathbf {Q}^w,\mathbf {D}^c \circ \mathbf {Q}^w,\mathbf {D}^c - \mathbf {Q}^w, \mathbf {D}^f, \mathbf {b}^{f_{max}}, \mathbf {b}^{f_{mean}}] \in \mathbb {R} ^{M \times (6d_c+2)}$$ (Eq. 16)
where " $\circ $ " stands for element-wise multiplication and " $-$ " is simply the vector subtraction.
After acquiring the local alignment representation, key information in the document and question has been collected, and the aggregation layer is then applied to find answers. We use three BiGRU layers to model the process that aggregates local information to make the global decision on the answer spans. We found that a residual architecture BIBREF22 , as described in Figure 2 , is very effective in this aggregation process:
$$\mathbf {I}^1_i=\text{BiGRU}(\mathbf {I}_i)$$ (Eq. 18)
$$\mathbf {I}^2_i=\mathbf {I}^1_i + \text{BiGRU}(\mathbf {I}^1_i)$$ (Eq. 19)
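One plausible reading of this residual aggregation is sketched below in PyTorch, with three BiGRU layers and residual connections on the second and third; only the first two layers are written out in Eq. 18-19, so the wiring of the third is an assumption.

```python
import torch
import torch.nn as nn

M, d_in, d_h = 30, 1202, 150
gru1 = nn.GRU(d_in, d_h, bidirectional=True, batch_first=True)
gru2 = nn.GRU(2 * d_h, d_h, bidirectional=True, batch_first=True)
gru3 = nn.GRU(2 * d_h, d_h, bidirectional=True, batch_first=True)

I = torch.randn(1, M, d_in)        # alignment-layer output
I1, _ = gru1(I)                    # Eq. 18
I2 = I1 + gru2(I1)[0]              # Eq. 19, residual connection
I3 = I2 + gru3(I2)[0]              # third layer, wired the same way (an assumption)
print(I3.shape)                    # (1, M, 2*d_h)
```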
The SQuAD QA task requires a span of text to answer a question. We use a pointer network BIBREF7 to predict the start and end positions of answers as in BIBREF6 . Different from their method, we use a two-directional prediction to obtain the positions. For one direction, we first predict the start position of the answer span and then predict the end position, which is implemented with the following equations:
$$P(s+)=softmax(W_{s+}\cdot I^3)$$ (Eq. 23)
$$P(e+)=softmax(W_{e+} \cdot I^3 + W_{h+} \cdot h_{s+})$$ (Eq. 24)
where $\mathbf {I}^3$ is inference layer output, $\mathbf {h}_{s+}$ is the hidden state of the first step, and all $\mathbf {W}$ are trainable matrices. We also perform this by predicting the end position first and then the starting position:
$$P(e-)=softmax(W_{e-}\cdot I^3)$$ (Eq. 25)
$$P(s-)=softmax(W_{s-} \cdot I^3 + W_{h-} \cdot h_{e-})$$ (Eq. 26)
We finally identify the span of an answer with the following equation:
$$P(s)=pooling([P(s+), P(s-)])$$ (Eq. 27)
$$P(e)=pooling([P(e+), P(e-)])$$ (Eq. 28)
We use the mean-pooling here as it is more effective on the development set than the alternatives such as the max-pooling.
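A NumPy sketch of the two-direction span prediction of Eq. 23 to Eq. 28 is given below. The first-step hidden states $h_{s+}$ and $h_{e-}$ are stood in for by attention-weighted sums of $\mathbf {I}^3$, and the conditioning term is written as a position-dependent bilinear score; both are illustrative simplifications of the pointer network used in the paper.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

M, d = 30, 300
I3 = np.random.randn(M, d)                           # inference-layer output
w_s1, w_e1, w_e2, w_s2 = (np.random.randn(d) for _ in range(4))
W_h1, W_h2 = np.random.randn(d, d), np.random.randn(d, d)

# direction 1: start first, then end conditioned on the first step
P_s_plus = softmax(I3 @ w_s1)                        # Eq. 23
h_s = P_s_plus @ I3                                  # stand-in for h_{s+}
P_e_plus = softmax(I3 @ w_e1 + I3 @ (W_h1 @ h_s))    # Eq. 24, illustrative form

# direction 2: end first, then start conditioned on the first step
P_e_minus = softmax(I3 @ w_e2)                       # Eq. 25
h_e = P_e_minus @ I3                                 # stand-in for h_{e-}
P_s_minus = softmax(I3 @ w_s2 + I3 @ (W_h2 @ h_e))   # Eq. 26, illustrative form

P_s = 0.5 * (P_s_plus + P_s_minus)                   # mean-pooling, Eq. 27
P_e = 0.5 * (P_e_plus + P_e_minus)                   # Eq. 28
print(int(P_s.argmax()), int(P_e.argmax()))
```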
Question Understanding and Adaptation
The interplay of syntax and semantics in natural language questions is of interest for question representation. We attempt to incorporate syntactic information into question representation with a TreeLSTM BIBREF13 , BIBREF14 . In general, a TreeLSTM can perform semantic composition over given syntactic structures.
Unlike the chain-structured LSTM BIBREF17 , the TreeLSTM captures long-distance interaction on a tree. The update of a TreeLSTM node is described at a high level with Equation ( 31 ), and the detailed computation is given in the remaining equations of that group. Specifically, the input of a TreeLSTM node is used to configure four gates: the input gate $\mathbf {i}_t$ , the output gate $\mathbf {o}_t$ , and the two forget gates $\mathbf {f}_t^L$ for the left child input and $\mathbf {f}_t^R$ for the right. The memory cell $\mathbf {c}_t$ considers each child's cell vector, $\mathbf {c}_{t-1}^L$ and $\mathbf {c}_{t-1}^R$ , which are gated by the left forget gate $\mathbf {f}_t^L$ and the right forget gate $\mathbf {f}_t^R$ , respectively.
$$\mathbf {h}_t &= \text{TreeLSTM}(\mathbf {x}_t, \mathbf {h}_{t-1}^L, \mathbf {h}_{t-1}^R), \\ \mathbf {h}_t &= \mathbf {o}_t \circ \tanh (\mathbf {c}_{t}),\\ \mathbf {o}_t &= \sigma (\mathbf {W}_o \mathbf {x}_t + \mathbf {U}_o^L \mathbf {h}_{t-1}^L + \mathbf {U}_o^R \mathbf {h}_{t-1}^R), \\\mathbf {c}_t &= \mathbf {f}_t^L \circ \mathbf {c}_{t-1}^L + \mathbf {f}_t^R \circ \mathbf {c}_{t-1}^R + \mathbf {i}_t \circ \mathbf {u}_t, \\\mathbf {f}_t^L &= \sigma (\mathbf {W}_f \mathbf {x}_t + \mathbf {U}_f^{LL} \mathbf {h}_{t-1}^L + \mathbf {U}_f^{LR} \mathbf {h}_{t-1}^R),\\ \mathbf {f}_t^R &= \sigma (\mathbf {W}_f \mathbf {x}_t + \mathbf {U}_f^{RL} \mathbf {h}_{t-1}^L + \mathbf {U}_f^{RR} \mathbf {h}_{t-1}^R), \\\mathbf {i}_t &= \sigma (\mathbf {W}_i \mathbf {x}_t + \mathbf {U}_i^L \mathbf {h}_{t-1}^L + \mathbf {U}_i^R \mathbf {h}_{t-1}^R), \\\mathbf {u}_t &= \tanh (\mathbf {W}_c \mathbf {x}_t + \mathbf {U}_c^L \mathbf {h}_{t-1}^L + \mathbf {U}_c^R \mathbf {h}_{t-1}^R),$$ (Eq. 31)
where $\sigma $ is the sigmoid function, $\circ $ is the element-wise multiplication of two vectors, and all $\mathbf {W}$ , $\mathbf {U}$ are trainable matrices.
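A single binary TreeLSTM node update (Equation 31) can be sketched directly from these definitions; all weight matrices below are random stand-ins.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d_x, d_h = 300, 200
rng = np.random.default_rng(0)
W = {g: rng.standard_normal((d_h, d_x)) for g in ["i", "o", "f", "c"]}
U = {k: rng.standard_normal((d_h, d_h))
     for k in ["iL", "iR", "oL", "oR", "cL", "cR", "fLL", "fLR", "fRL", "fRR"]}

def tree_lstm_node(x, hL, cL, hR, cR):
    i  = sigmoid(W["i"] @ x + U["iL"] @ hL + U["iR"] @ hR)
    o  = sigmoid(W["o"] @ x + U["oL"] @ hL + U["oR"] @ hR)
    fL = sigmoid(W["f"] @ x + U["fLL"] @ hL + U["fLR"] @ hR)
    fR = sigmoid(W["f"] @ x + U["fRL"] @ hL + U["fRR"] @ hR)
    u  = np.tanh(W["c"] @ x + U["cL"] @ hL + U["cR"] @ hR)
    c  = fL * cL + fR * cR + i * u
    h  = o * np.tanh(c)
    return h, c

x = rng.standard_normal(d_x)
zeros = np.zeros(d_h)                 # leaf children carry empty states
h, c = tree_lstm_node(x, zeros, zeros, zeros, zeros)
print(h.shape, c.shape)
```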
To obtain the parse tree information, we use Stanford CoreNLP (PCFG Parser) BIBREF23 , BIBREF24 to produce a binarized constituency parse for each question and build the TreeLSTM based on the parse tree. The root node of TreeLSTM is used as the representation for the whole question. More specifically, we use it as TreeLSTM Q-code $\mathbf {Q}^{TL}\in \mathbb {R} ^{d_c}$ , by not only simply concatenating it to the alignment layer output but also using it as a question filter, just as we discussed in the question-based filtering section:
$$\mathbf {Q}^{TL}=\text{TreeLSTM}(\mathbf {Q}^e) \in \mathbb {R} ^{d_c}$$ (Eq. 32)
$$\mathbf {b}^{TL}=norm(\mathbf {Q}^{TL} \cdot \mathbf {D}^{c\mathrm {T}}) \in \mathbb {R} ^{M}$$ (Eq. 33)
where $\mathbf {I}_{new}$ is the new output of the alignment layer, obtained by concatenating $\mathbf {I}$ with the repeated TreeLSTM Q-code, and the function $repmat$ copies $\mathbf {Q}^{TL}$ $M$ times so that it matches the length of $\mathbf {I}$ .
Questions by nature are often composed to fulfill different types of information needs. For example, a "when" question seeks a different type of information (i.e., temporal information) than a "why" question does. Different types of questions and the corresponding answers could potentially have different distributional regularities.
Previous models are often trained on all questions without explicitly discriminating between question types; however, for a target question, both the common features shared by all questions and the features specific to its question type are considered in this paper, as they could potentially obey different distributions. We therefore explicitly model different types of questions in the end-to-end training. We start from a simple approach: we first analyze the word frequency of all questions and obtain the top-10 most frequent question types: what, how, who, when, which, where, why, be, whose, and whom, where be stands for questions beginning with different forms of the word be such as is, am, and are. We encode the question-type information as an 11-dimensional one-hot vector (the top-10 question types plus an "other" type), and associate each question type with a trainable embedding vector. We call this the explicit question type code, $\mathbf {ET}\in \mathbb {R} ^{d_{ET}}$ . The vector for each question type is tuned during training and added to the system with the following equation:
$$\mathbf {I}_{new}=[\mathbf {I}, repmat(\mathbf {ET})]$$ (Eq. 38)
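A sketch of the explicit question-type code: the type is read off the first question word, mapped to a trainable embedding, and repeated $M$ times before being concatenated to $\mathbf {I}$ (Eq. 38). The simple prefix matching used to detect the type is an illustrative assumption.

```python
import torch
import torch.nn as nn

TYPES = ["what", "how", "who", "when", "which", "where", "why", "be", "whose", "whom", "other"]
BE_FORMS = {"is", "am", "are", "was", "were"}
type_emb = nn.Embedding(len(TYPES), 50)          # d_ET = 50, tuned during training

def question_type(question):
    first = question.lower().split()[0]
    if first in BE_FORMS:
        first = "be"
    return TYPES.index(first) if first in TYPES else TYPES.index("other")

M, d_I = 30, 1202
I = torch.randn(M, d_I)
t = torch.tensor(question_type("When was the university founded?"))
ET = type_emb(t)                                 # (50,)
I_new = torch.cat([I, ET.expand(M, -1)], dim=1)  # Eq. 38
print(I_new.shape)                               # (M, d_I + 50)
```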
As discussed, different types of questions and their answers may share common regularities while also having their own properties. We view this as an adaptation problem: different types of questions share a basic model but are still discriminated when needed. Specifically, we borrow ideas from speaker adaptation BIBREF18 in speech recognition, where neural-network-based adaptation is performed among different groups of speakers.
Conceptually, we regard a type of questions as analogous to a group of acoustically similar speakers. Specifically, we propose a question discriminative block, or simply a discriminative block (Figure 3 ), to perform question adaptation. The main idea is described below:
$$\mathbf {x^\prime } = f([\mathbf {x}, \mathbf {\bar{x}}^c, \mathbf {\delta _x}])$$ (Eq. 40)
Each input question $\mathbf {x}$ can be decomposed into two parts: the cluster it belongs to (i.e., its question type) and its deviation within that cluster. The cluster information is encoded in a vector $\mathbf {\bar{x}}^c$ . In order to keep the calculation differentiable, we compute weights over all clusters based on the distances between $\mathbf {x}$ and each cluster center vector, instead of just choosing the closest cluster. The discriminative vector $\mathbf {\delta _x}$ with regard to these most relevant clusters is then computed. All this information is combined to obtain the discriminative information. In order to keep the full information of the input, we also feed the input question $\mathbf {x}$ , together with the acquired discriminative information, to a feed-forward layer to obtain a new representation $\mathbf {x^\prime }$ for the question.
More specifically, the adaptation algorithm contains two steps: adapting and updating, which is detailed as follows:
Adapting In the adapting step, we first compute the similarity score between an input question vector $\mathbf {x}\in \mathbb {R} ^{h}$ and each centroid vector of the $K$ clusters $~\mathbf {\bar{x}}\in \mathbb {R} ^{K \times h}$ . Each cluster here models a question type. Unlike the explicit question-type modeling discussed above, here we do not specify which question types we are modeling but let the system learn them. Specifically, we only need to pre-specify how many clusters, $K$ , we are modeling. The similarity between an input question and a cluster centroid is used to compute the similarity weight $\mathbf {w}^a$ :
$$w_k^a = softmax(cos\_sim(\mathbf {x}, \mathbf {\bar{x}}_k), \alpha ), \forall k \in [1, \dots , K]$$ (Eq. 43)
$$cos\_sim(\mathbf {u}, \mathbf {v}) = \frac{<\mathbf {u},\mathbf {v}>}{||\mathbf {u}|| \cdot ||\mathbf {v}||}$$ (Eq. 44)
We set $\alpha $ to 50 to make sure that only the closest cluster receives a high weight while the computation remains differentiable. Then we acquire a soft class-center vector $\mathbf {\bar{x}}^c$ :
$$\mathbf {\bar{x}}^c = \sum _k w^a_k \mathbf {\bar{x}}_k \in \mathbb {R} ^{h}$$ (Eq. 46)
We then compute a discriminative vector $\mathbf {\delta _x}$ between the input question with regard to the soft class-center vector:
$$\mathbf {\delta _x} = \mathbf {x} - \mathbf {\bar{x}}^c$$ (Eq. 47)
Note that $\bar{\mathbf {x}}^c$ here models the cluster information and $\mathbf {\delta _x}$ represents the discriminative information within the cluster. By feeding $\mathbf {x}$ , $\bar{\mathbf {x}}^c$ and $\mathbf {\delta _x}$ into a feed-forward layer with ReLU, we obtain $\mathbf {x^{\prime }}\in \mathbb {R} ^{h}$ :
$$\mathbf {x^{\prime }} = Relu(\mathbf {W} \cdot [\mathbf {x},\bar{\mathbf {x}}^c,\mathbf {\delta _x}])$$ (Eq. 48)
With $\mathbf {x^{\prime }}$ ready, we can apply Discriminative Block to any question code and obtain its adaptation Q-code. In this paper, we use TreeLSTM Q-code as the input vector $\mathbf {x}$ , and obtain TreeLSTM adaptation Q-code $\mathbf {Q}^{TLa}\in \mathbb {R} ^{d_c}$ . Similar to TreeLSTM Q-code $\mathbf {Q}^{TL}$ , we concatenate $\mathbf {Q}^{TLa}$ to alignment output $\mathbf {I}$ and also use it as a question filter:
$$\mathbf {Q}^{TLa} = Relu(\mathbf {W} \cdot [\mathbf {Q}^{TL},\overline{\mathbf {Q}^{TL}}^c,\mathbf {\delta _{\mathbf {Q}^{TL}}}])$$ (Eq. 49)
$$\mathbf {b}^{TLa}=norm(\mathbf {Q}^{TLa} \cdot \mathbf {D}^{c\mathrm {T}}) \in \mathbb {R} ^{M}$$ (Eq. 50)
Updating The updating stage attempts to modify the center vectors of the $K$ clusters in order to fit each cluster to model different types of questions. The updating is performed according to the following formula:
$$\mathbf {\bar{x}^{\prime }}_k = (1-\beta \text{w}_k^a)\mathbf {\bar{x}}_k+\beta \text{w}_k^a\mathbf {x}, \forall k \in [1, \dots , K]$$ (Eq. 54)
In the equation, $\beta $ is an updating rate used to control the amount of each update, and we set it to 0.01. When $\mathbf {x}$ is far away from the $k$ -th cluster center $\mathbf {\bar{x}}_k$ , $\text{w}_k^a$ is close to 0 and the $k$ -th cluster center $\mathbf {\bar{x}}_k$ tends not to be updated. If $\mathbf {x}$ is instead close to the $j$ -th cluster center $\mathbf {\bar{x}}_j$ , $\text{w}_j^a$ is close to 1 and the centroid of the $j$ -th cluster $\mathbf {\bar{x}}_j$ will be updated more aggressively using $\mathbf {x}$ .
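The two steps of the discriminative block can be sketched together as below; $\alpha =50$ and $\beta =0.01$ follow the values stated above, $\alpha $ is treated as a softmax temperature, and the weight matrix is a random stand-in.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

K, h = 100, 500
rng = np.random.default_rng(0)
centers = rng.standard_normal((K, h))          # cluster centroids \bar{x}
W = rng.standard_normal((h, 3 * h))
alpha, beta = 50.0, 0.01

def adapt(x):
    cos = centers @ x / (np.linalg.norm(centers, axis=1) * np.linalg.norm(x))
    w_a = softmax(alpha * cos)                              # Eq. 43-44
    x_c = w_a @ centers                                     # soft class-center, Eq. 46
    delta = x - x_c                                         # Eq. 47
    x_prime = np.maximum(0.0, W @ np.concatenate([x, x_c, delta]))  # Eq. 48
    return x_prime, w_a

def update(x, w_a):
    # Eq. 54: centroids with a large weight move towards x more aggressively
    return (1.0 - beta * w_a)[:, None] * centers + (beta * w_a)[:, None] * x

x = rng.standard_normal(h)
x_prime, w_a = adapt(x)
centers = update(x, w_a)
print(x_prime.shape, centers.shape)
```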
Set-Up
We test our models on the Stanford Question Answering Dataset (SQuAD) BIBREF3 . The SQuAD dataset consists of more than 100,000 questions annotated by crowdsourcing workers on a selected set of Wikipedia articles, and the answer to each question is a span of text in the Wikipedia articles. The training data includes 87,599 instances and the validation set has 10,570 instances. The test data is hidden and kept by the organizers. SQuAD is evaluated with the Exact Match (EM) and F1 scores.
We use the pre-trained 300-D GloVe 840B vectors BIBREF20 to initialize our word embeddings. Out-of-vocabulary (OOV) words are initialized randomly with Gaussian samples. The CharCNN filter widths are 1, 3, and 5, with 50 dimensions each. All vectors, including word embeddings, are updated during training. The cluster number $K$ in the discriminative block is 100. The Adam method BIBREF25 is used for optimization, with the first momentum set to 0.9 and the second to 0.999. The initial learning rate is 0.0004 and the batch size is 32. We halve the learning rate when a bad iteration is encountered, with a patience of 7. Early stopping is based on the EM and F1 scores on the validation set. All hidden states of the GRUs and TreeLSTMs are 500 dimensions, while the word-level embedding $d_w$ is 300 dimensions. We set the maximum document length to 500 tokens and drop the question-document pairs beyond this length from the training set. The explicit question-type dimension $d_{ET}$ is 50. We apply dropout to the encoder layer and the aggregation layer with a dropout rate of 0.5.
Results
Table 1 shows the official leaderboard on the SQuAD test set at the time we submitted our system. Our model achieves a 68.73% EM score and a 77.39% F1 score, ranking among the state-of-the-art single models (without model ensembling).
Table 2 shows the ablation performance of the various Q-codes on the development set. Note that since the test set is hidden from us, we can only perform such an analysis on the development set. Our baseline model using no Q-code achieved 68.00% EM and 77.36% F1, respectively. When we added the explicit question-type code into the baseline model, the performance improved slightly to 68.16% (EM) and 77.58% (F1). We then used the TreeLSTM to introduce syntactic parses for question representation and understanding (replacing the simple question type as the question-understanding Q-code), which consistently shows further improvement. We further incorporated the soft adaptation. When setting the number of hidden question types ( $K$ ) to 20, the performance improves to 68.73%/77.74% on EM and F1, respectively, which corresponds to the results of our model reported in Table 1 . Furthermore, after submitting our result, we experimented with a larger value of $K$ and found that with $K=100$ we can achieve a better performance of 69.10%/78.38% on the development set.
Figure UID61 shows the EM/F1 scores of different question types, while Figure UID62 shows the distribution of question-type counts on the development set. In Figure UID61 we can see that the average EM/F1 of the "when" questions is the highest and that of the "why" questions is the lowest. From Figure UID62 we can see that the "what" questions form the largest class.
Figure 5 shows the composition of the F1 score. Taking our best model as an example, we observed a 78.38% F1 score on the whole development set, which can be separated into two parts: in the first, the F1 score equals 100%, meaning an exact match; this part accounts for 69.10% of the entire development set. The other part accounts for 30.90%, with an average F1 score of 30.03%. The latter can be further divided into two sub-parts: in one, the F1 score equals 0%, meaning the predicted answer is completely wrong; this part occupies 14.89% of the total development set. The other sub-part accounts for 16.01% of the development set, with an average F1 score of 57.96%. From this analysis we can see that reducing the zero-F1 cases (14.89%) is potentially an important direction for further improving the system.
Conclusions
Closely modeling questions can be important for question answering and machine reading. In this paper, we introduce syntactic information to help encode questions in neural networks. We view and model different types of questions and the information shared among them as an adaptation task and propose adaptation models for them. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches help attain better results over a competitive baseline.
6c8bd7fa1cfb1b2bbeb011cc9c712dceac0c8f06 | 6c8bd7fa1cfb1b2bbeb011cc9c712dceac0c8f06_0 | Q: what is the architecture of the baseline model?
Text: Introduction
Enabling computers to understand given documents and answer questions about their content has recently attracted intensive interest, including but not limited to the efforts as in BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Many specific problems such as machine comprehension and question answering often involve modeling such question-document pairs.
The recent availability of relatively large training datasets (see Section "Related Work" for more details) has made it more feasible to train and estimate rather complex models in an end-to-end fashion for these problems, in which a whole model is fit directly with given question-answer tuples and the resulting model has shown to be rather effective.
In this paper, we take a closer look at modeling questions in such an end-to-end neural network framework, since we regard question understanding is of importance for such problems. We first introduced syntactic information to help encode questions. We then viewed and modelled different types of questions and the information shared among them as an adaptation problem and proposed adaptation models for them. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches can help attain better results on our competitive baselines.
Related Work
Recent advance on reading comprehension and question answering has been closely associated with the availability of various datasets. BIBREF0 released the MCTest data consisting of 500 short, fictional open-domain stories and 2000 questions. The CNN/Daily Mail dataset BIBREF1 contains news articles for close style machine comprehension, in which only entities are removed and tested for comprehension. Children's Book Test (CBT) BIBREF2 leverages named entities, common nouns, verbs, and prepositions to test reading comprehension. The Stanford Question Answering Dataset (SQuAD) BIBREF3 is more recently released dataset, which consists of more than 100,000 questions for documents taken from Wikipedia across a wide range of topics. The question-answer pairs are annotated through crowdsourcing. Answers are spans of text marked in the original documents. In this paper, we use SQuAD to evaluate our models.
Many neural network models have been studied on the SQuAD task. BIBREF6 proposed match LSTM to associate documents and questions and adapted the so-called pointer Network BIBREF7 to determine the positions of the answer text spans. BIBREF8 proposed a dynamic chunk reader to extract and rank a set of answer candidates. BIBREF9 focused on word representation and presented a fine-grained gating mechanism to dynamically combine word-level and character-level representations based on the properties of words. BIBREF10 proposed a multi-perspective context matching (MPCM) model, which matched an encoded document and question from multiple perspectives. BIBREF11 proposed a dynamic decoder and so-called highway maxout network to improve the effectiveness of the decoder. The bi-directional attention flow (BIDAF) BIBREF12 used the bi-directional attention to obtain a question-aware context representation.
In this paper, we introduce syntactic information to encode questions with a specific form of recursive neural networks BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . More specifically, we explore a tree-structured LSTM BIBREF13 , BIBREF14 which extends the linear-chain long short-term memory (LSTM) BIBREF17 to a recursive structure, which has the potential to capture long-distance interactions over the structures.
Different types of questions are often used to seek for different types of information. For example, a "what" question could have very different property from that of a "why" question, while they may share information and need to be trained together instead of separately. We view this as a "adaptation" problem to let different types of questions share a basic model but still discriminate them when needed. Specifically, we are motivated by the ideas "i-vector" BIBREF18 in speech recognition, where neural network based adaptation is performed among different (groups) of speakers and we focused instead on different types of questions here.
The Baseline Model
Our baseline model is composed of the following typical components: word embedding, input encoder, alignment, aggregation, and prediction. Below we discuss these components in more details.
We concatenate embedding at two levels to represent a word: the character composition and word-level embedding. The character composition feeds all characters of a word into a convolutional neural network (CNN) BIBREF19 to obtain a representation for the word. And we use the pre-trained 300-D GloVe vectors BIBREF20 (see the experiment section for details) to initialize our word-level embedding. Each word is therefore represented as the concatenation of the character-composition vector and word-level embedding. This is performed on both questions and documents, resulting in two matrices: the $\mathbf {Q}^e \in \mathbb {R} ^{N\times d_w}$ for a question and the $\mathbf {D}^e \in \mathbb {R} ^{M\times d_w}$ for a document, where $N$ is the question length (number of word tokens), $M$ is the document length, and $d_w$ is the embedding dimensionality.
The above word representation focuses on representing individual words, and an input encoder here employs recurrent neural networks to obtain the representation of a word under its context. We use bi-directional GRU (BiGRU) BIBREF21 for both documents and questions.
$${\mathbf {Q}^c_i}&=\text{BiGRU}(\mathbf {Q}^e_i,i),\forall i \in [1, \dots , N] \\ {\mathbf {D}^c_j}&=\text{BiGRU}(\mathbf {D}^e_j,j),\forall j \in [1, \dots , M]$$ (Eq. 5)
A BiGRU runs a forward and backward GRU on a sequence starting from the left and the right end, respectively. By concatenating the hidden states of these two GRUs for each word, we obtain the a representation for a question or document: $\mathbf {Q}^c \in \mathbb {R} ^{N\times d_c}$ for a question and $\mathbf {D}^c \in \mathbb {R} ^{M\times d_c}$ for a document.
Questions and documents interact closely. As in most previous work, our framework use both soft attention over questions and that over documents to capture the interaction between them. More specifically, in this soft-alignment layer, we first feed the contextual representation matrix $\mathbf {Q}^c$ and $\mathbf {D}^c$ to obtain alignment matrix $\mathbf {U} \in \mathbb {R} ^{N\times M}$ :
$$\mathbf {U}_{ij} =\mathbf {Q}_i^c \cdot \mathbf {D}_j^{c\mathrm {T}}, \forall i \in [1, \dots , N], \forall j \in [1, \dots , M]$$ (Eq. 7)
Each $\mathbf {U}_{ij}$ represents the similarity between a question word $\mathbf {Q}_i^c$ and a document word $\mathbf {D}_j^c$ .
Word-level Q-code Similar as in BIBREF12 , we obtain a word-level Q-code. Specifically, for each document word $w_j$ , we find which words in the question are relevant to it. To this end, $\mathbf {a}_j\in \mathbb {R} ^{N}$ is computed with the following equation and used as a soft attention weight:
$$\mathbf {a}_j = softmax(\mathbf {U}_{:j}), \forall j \in [1, \dots , M]$$ (Eq. 8)
With the attention weights computed, we obtain the encoding of the question for each document word $w_j$ as follows, which we call word-level Q-code in this paper:
$$\mathbf {Q}^w=\mathbf {a}^{\mathrm {T}} \cdot \mathbf {Q}^{c} \in \mathbb {R} ^{M\times d_c}$$ (Eq. 9)
Question-based filtering To better explore question understanding, we design this question-based filtering layer. As detailed later, different question representation can be easily incorporated to this layer in addition to being used as a filter to find key information in the document based on the question. This layer is expandable with more complicated question modeling.
In the basic form of question-based filtering, for each question word $w_i$ , we find which words in the document are associated. Similar to $\mathbf {a}_j$ discussed above, we can obtain the attention weights on document words for each question word $w_i$ :
$$\mathbf {b}_i=softmax(\mathbf {U}_{i:})\in \mathbb {R} ^{M}, \forall i \in [1, \dots , N]$$ (Eq. 10)
By pooling $\mathbf {b}\in \mathbb {R} ^{N\times M}$ , we can obtain a question-based filtering weight $\mathbf {b}^f$ :
$$\mathbf {b}^f=norm(pooling(\mathbf {b})) \in \mathbb {R} ^{M}$$ (Eq. 11)
$$norm(\mathbf {x})=\frac{\mathbf {x}}{\sum _i x_i}$$ (Eq. 12)
where the specific pooling function we used include max-pooling and mean-pooling. Then the document softly filtered based on the corresponding question $\mathbf {D}^f$ can be calculated by:
$$\mathbf {D}_j^{f_{max}}=b^{f_{max}}_j \mathbf {D}_j^{c}, \forall j \in [1, \dots , M]$$ (Eq. 13)
$$\mathbf {D}_j^{f_{mean}}=b^{f_{mean}}_j \mathbf {D}_j^{c}, \forall j \in [1, \dots , M]$$ (Eq. 14)
Through concatenating the document representation $\mathbf {D}^c$ , word-level Q-code $\mathbf {Q}^w$ and question-filtered document $\mathbf {D}^f$ , we can finally obtain the alignment layer representation:
$$\mathbf {I}=[\mathbf {D}^c, \mathbf {Q}^w,\mathbf {D}^c \circ \mathbf {Q}^w,\mathbf {D}^c - \mathbf {Q}^w, \mathbf {D}^f, \mathbf {b}^{f_{max}}, \mathbf {b}^{f_{mean}}] \in \mathbb {R} ^{M \times (6d_c+2)}$$ (Eq. 16)
where " $\circ $ " stands for element-wise multiplication and " $-$ " is simply the vector subtraction.
After acquiring the local alignment representation, key information in document and question has been collected, and the aggregation layer is then performed to find answers. We use three BiGRU layers to model the process that aggregates local information to make the global decision to find the answer spans. We found a residual architecture BIBREF22 as described in Figure 2 is very effective in this aggregation process:
$$\mathbf {I}^1_i=\text{BiGRU}(\mathbf {I}_i)$$ (Eq. 18)
$$\mathbf {I}^2_i=\mathbf {I}^1_i + \text{BiGRU}(\mathbf {I}^1_i)$$ (Eq. 19)
The SQuAD QA task requires a span of text to answer a question. We use a pointer network BIBREF7 to predict the starting and end position of answers as in BIBREF6 . Different from their methods, we use a two-directional prediction to obtain the positions. For one direction, we first predict the starting position of the answer span followed by predicting the end position, which is implemented with the following equations:
$$P(s+)=softmax(W_{s+}\cdot I^3)$$ (Eq. 23)
$$P(e+)=softmax(W_{e+} \cdot I^3 + W_{h+} \cdot h_{s+})$$ (Eq. 24)
where $\mathbf {I}^3$ is inference layer output, $\mathbf {h}_{s+}$ is the hidden state of the first step, and all $\mathbf {W}$ are trainable matrices. We also perform this by predicting the end position first and then the starting position:
$$P(e-)=softmax(W_{e-}\cdot I^3)$$ (Eq. 25)
$$P(s-)=softmax(W_{s-} \cdot I^3 + W_{h-} \cdot h_{e-})$$ (Eq. 26)
We finally identify the span of an answer with the following equation:
$$P(s)=pooling([P(s+), P(s-)])$$ (Eq. 27)
$$P(e)=pooling([P(e+), P(e-)])$$ (Eq. 28)
We use the mean-pooling here as it is more effective on the development set than the alternatives such as the max-pooling.
Question Understanding and Adaptation
The interplay of syntax and semantics of natural language questions is of interest for question representation. We attempt to incorporate syntactic information in questions representation with TreeLSTM BIBREF13 , BIBREF14 . In general a TreeLSTM could perform semantic composition over given syntactic structures.
Unlike the chain-structured LSTM BIBREF17 , the TreeLSTM captures long-distance interaction on a tree. The update of a TreeLSTM node is described at a high level with Equation ( 31 ), and the detailed computation is described in (–). Specifically, the input of a TreeLSTM node is used to configure four gates: the input gate $\mathbf {i}_t$ , output gate $\mathbf {o}_t$ , and the two forget gates $\mathbf {f}_t^L$ for the left child input and $\mathbf {f}_t^R$ for the right. The memory cell $\mathbf {c}_t$ considers each child's cell vector, $\mathbf {c}_{t-1}^L$ and $\mathbf {c}_{t-1}^R$ , which are gated by the left forget gate $\mathbf {f}_t^L$ and right forget gate $\mathbf {f}_t^R$ , respectively.
$$\mathbf {h}_t &= \text{TreeLSTM}(\mathbf {x}_t, \mathbf {h}_{t-1}^L, \mathbf {h}_{t-1}^R), \\ \mathbf {h}_t &= \mathbf {o}_t \circ \tanh (\mathbf {c}_{t}),\\ \mathbf {o}_t &= \sigma (\mathbf {W}_o \mathbf {x}_t + \mathbf {U}_o^L \mathbf {h}_{t-1}^L + \mathbf {U}_o^R \mathbf {h}_{t-1}^R), \\\mathbf {c}_t &= \mathbf {f}_t^L \circ \mathbf {c}_{t-1}^L + \mathbf {f}_t^R \circ \mathbf {c}_{t-1}^R + \mathbf {i}_t \circ \mathbf {u}_t, \\\mathbf {f}_t^L &= \sigma (\mathbf {W}_f \mathbf {x}_t + \mathbf {U}_f^{LL} \mathbf {h}_{t-1}^L + \mathbf {U}_f^{LR} \mathbf {h}_{t-1}^R),\\ \mathbf {f}_t^R &= \sigma (\mathbf {W}_f \mathbf {x}_t + \mathbf {U}_f^{RL} \mathbf {h}_{t-1}^L + \mathbf {U}_f^{RR} \mathbf {h}_{t-1}^R), \\\mathbf {i}_t &= \sigma (\mathbf {W}_i \mathbf {x}_t + \mathbf {U}_i^L \mathbf {h}_{t-1}^L + \mathbf {U}_i^R \mathbf {h}_{t-1}^R), \\\mathbf {u}_t &= \tanh (\mathbf {W}_c \mathbf {x}_t + \mathbf {U}_c^L \mathbf {h}_{t-1}^L + \mathbf {U}_c^R \mathbf {h}_{t-1}^R),$$ (Eq. 31)
where $\sigma $ is the sigmoid function, $\circ $ is the element-wise multiplication of two vectors, and all $\mathbf {W}$ , $\mathbf {U}$ are trainable matrices.
To obtain the parse tree information, we use Stanford CoreNLP (PCFG Parser) BIBREF23 , BIBREF24 to produce a binarized constituency parse for each question and build the TreeLSTM based on the parse tree. The root node of TreeLSTM is used as the representation for the whole question. More specifically, we use it as TreeLSTM Q-code $\mathbf {Q}^{TL}\in \mathbb {R} ^{d_c}$ , by not only simply concatenating it to the alignment layer output but also using it as a question filter, just as we discussed in the question-based filtering section:
$$\mathbf {Q}^{TL}=\text{TreeLSTM}(\mathbf {Q}^e) \in \mathbb {R} ^{d_c}$$ (Eq. 32)
$$\mathbf {b}^{TL}=norm(\mathbf {Q}^{TL} \cdot \mathbf {D}^{c\mathrm {T}}) \in \mathbb {R} ^{M}$$ (Eq. 33)
where $\mathbf {I}_{new}$ is the new output of the alignment layer, obtained by concatenating $repmat(\mathbf {Q}^{TL})$ to $\mathbf {I}$ ; the function $repmat$ copies $\mathbf {Q}^{TL}$ $M$ times to match the shape of $\mathbf {I}$ .
Questions by nature are often composed to fulfill different types of information needs. For example, a "when" question seeks for different types of information (i.e., temporal information) than those for a "why" question. Different types of questions and the corresponding answers could potentially have different distributional regularity.
The previous models are often trained for all questions without explicitly discriminating different question types; however, for a target question, both the common features shared by all questions and the specific features for a specific type of question are further considered in this paper, as they could potentially obey different distributions. In this paper we further explicitly model different types of questions in the end-to-end training. We start from a simple way to first analyze the word frequency of all questions, and obtain top-10 most frequent question types: what, how, who, when, which, where, why, be, whose, and whom, in which be stands for the questions beginning with different forms of the word be such as is, am, and are. We explicitly encode question-type information to be an 11-dimensional one-hot vector (the top-10 question types and "other" question type). Each question type is with a trainable embedding vector. We call this explicit question type code, $\mathbf {ET}\in \mathbb {R} ^{d_{ET}}$ . Then the vector for each question type is tuned during training, and is added to the system with the following equation:
$$\mathbf {I}_{new}=[\mathbf {I}, repmat(\mathbf {ET})]$$ (Eq. 38)
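As a small illustration of the explicit question-type code, the sketch below maps a question to one of the 11 types with a hypothetical rule based on its leading word, looks up a (here randomly initialized) type embedding, and tiles it across the $M$ document positions as in Eq. 38.

```python
import numpy as np

TYPES = ["what", "how", "who", "when", "which", "where",
         "why", "be", "whose", "whom", "other"]
BE_FORMS = {"is", "am", "are", "was", "were", "be"}
d_ET, M, d_I = 50, 30, 12                        # type-embedding dim, document length, toy alignment width
ET_table = np.random.randn(len(TYPES), d_ET)     # trainable in the real model; random here

def question_type(question_tokens):
    # hypothetical rule: decide the type from the leading question word
    w = question_tokens[0].lower()
    if w in BE_FORMS:
        return "be"
    return w if w in TYPES else "other"

q = ["when", "was", "the", "treaty", "signed", "?"]
ET = ET_table[TYPES.index(question_type(q))]     # (d_ET,)

I = np.random.randn(M, d_I)                      # alignment-layer output (placeholder)
I_new = np.concatenate([I, np.tile(ET, (M, 1))], axis=1)   # Eq. 38: [I, repmat(ET)]
print(I_new.shape)                               # (30, 62)
```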
As discussed, different types of questions and their answers may share common regularity and have separate property at the same time. We also view this as an adaptation problem in order to let different types of questions share a basic model but still discriminate them when needed. Specifically, we borrow ideas from speaker adaptation BIBREF18 in speech recognition, where neural-network-based adaptation is performed among different groups of speakers.
Conceptually we regard a type of questions as a group of acoustically similar speakers. Specifically we propose a question discriminative block or simply called a discriminative block (Figure 3 ) below to perform question adaptation. The main idea is described below:
$$\mathbf {x^\prime } = f([\mathbf {x}, \mathbf {\bar{x}}^c, \mathbf {\delta _x}])$$ (Eq. 40)
For each input question $\mathbf {x}$ , we can decompose it into two parts: the cluster it belongs to (i.e., the question type) and its deviation within that cluster. The cluster information is encoded in a vector $\mathbf {\bar{x}}^c$ . To keep the computation differentiable, we weight all clusters based on the distance between $\mathbf {x}$ and each cluster center vector, instead of just choosing the closest cluster. The discriminative vector $\mathbf {\delta _x}$ with regard to these most relevant clusters is then computed. All this information is combined to obtain the discriminative information. To retain the full information of the input, we also feed the input question $\mathbf {x}$ , together with the acquired discriminative information, into a feed-forward layer to obtain a new representation $\mathbf {x^\prime }$ for the question.
More specifically, the adaptation algorithm contains two steps: adapting and updating, which is detailed as follows:
Adapting In the adapting step, we first compute the similarity score between an input question vector $\mathbf {x}\in \mathbb {R} ^{h}$ and each centroid vector of the $K$ clusters $~\mathbf {\bar{x}}\in \mathbb {R} ^{K \times h}$ . Each cluster here models a question type. Unlike the explicit question-type modeling discussed above, here we do not specify which question types we are modeling but let the system learn them. Specifically, we only need to pre-specify the number of clusters, $K$ . The similarity between an input question and a cluster centroid is used to compute the similarity weight $\mathbf {w}^a$ :
$$w_k^a = softmax(cos\_sim(\mathbf {x}, \mathbf {\bar{x}}_k), \alpha ), \forall k \in [1, \dots , K]$$ (Eq. 43)
$$cos\_sim(\mathbf {u}, \mathbf {v}) = \frac{<\mathbf {u},\mathbf {v}>}{||\mathbf {u}|| \cdot ||\mathbf {v}||}$$ (Eq. 44)
We set $\alpha $ to 50 to ensure that only the closest cluster receives a high weight while keeping the computation differentiable. Then we acquire a soft class-center vector $\mathbf {\bar{x}}^c$ :
$$\mathbf {\bar{x}}^c = \sum _k w^a_k \mathbf {\bar{x}}_k \in \mathbb {R} ^{h}$$ (Eq. 46)
We then compute a discriminative vector $\mathbf {\delta _x}$ between the input question with regard to the soft class-center vector:
$$\mathbf {\delta _x} = \mathbf {x} - \mathbf {\bar{x}}^c$$ (Eq. 47)
Note that $\bar{\mathbf {x}}^c$ here models the cluster information and $\mathbf {\delta _x}$ represents the discriminative information in the cluster. By feeding $\mathbf {x}$ , $\bar{\mathbf {x}}^c$ and $\mathbf {\delta _x}$ into feedforward layer with Relu, we obtain $\mathbf {x^{\prime }}\in \mathbb {R} ^{K}$ :
$$\mathbf {x^{\prime }} = Relu(\mathbf {W} \cdot [\mathbf {x},\bar{\mathbf {x}}^c,\mathbf {\delta _x}])$$ (Eq. 48)
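A minimal NumPy sketch of the adapting step (Eq. 43–48) is given below; the cluster centroids, the feed-forward weights, and the toy dimensions are placeholders for the trained parameters.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

h, K, alpha = 8, 5, 50.0                      # toy sizes; alpha = 50 as in the paper
rng = np.random.default_rng(0)
centers = rng.standard_normal((K, h))         # cluster centroids, stand-ins for the learned ones
W_ff = rng.standard_normal((h, 3 * h))        # feed-forward weights (placeholder)

def adapt(x):
    cos = centers @ x / (np.linalg.norm(centers, axis=1) * np.linalg.norm(x))
    w_a = softmax(alpha * cos)                # Eq. 43-44: sharp but differentiable cluster weights
    x_c = w_a @ centers                       # Eq. 46: soft class-center vector
    delta = x - x_c                           # Eq. 47: discriminative vector
    x_new = np.maximum(0.0, W_ff @ np.concatenate([x, x_c, delta]))  # Eq. 48: ReLU feed-forward
    return x_new, w_a

x = rng.standard_normal(h)                    # an input question vector
x_adapted, w_a = adapt(x)
```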
With $\mathbf {x^{\prime }}$ ready, we can apply Discriminative Block to any question code and obtain its adaptation Q-code. In this paper, we use TreeLSTM Q-code as the input vector $\mathbf {x}$ , and obtain TreeLSTM adaptation Q-code $\mathbf {Q}^{TLa}\in \mathbb {R} ^{d_c}$ . Similar to TreeLSTM Q-code $\mathbf {Q}^{TL}$ , we concatenate $\mathbf {Q}^{TLa}$ to alignment output $\mathbf {I}$ and also use it as a question filter:
$$\mathbf {Q}^{TLa} = Relu(\mathbf {W} \cdot [\mathbf {Q}^{TL},\overline{\mathbf {Q}^{TL}}^c,\mathbf {\delta _{\mathbf {Q}^{TL}}}])$$ (Eq. 49)
$$\mathbf {b}^{TLa}=norm(\mathbf {Q}^{TLa} \cdot \mathbf {D}^{c\mathrm {T}}) \in \mathbb {R} ^{M}$$ (Eq. 50)
Updating The updating stage attempts to modify the center vectors of the $K$ clusters in order to fit each cluster to model different types of questions. The updating is performed according to the following formula:
$$\mathbf {\bar{x}^{\prime }}_k = (1-\beta \text{w}_k^a)\mathbf {\bar{x}}_k+\beta \text{w}_k^a\mathbf {x}, \forall k \in [1, \dots , K]$$ (Eq. 54)
In the equation, $\beta $ is an updating rate used to control the magnitude of each update, and we set it to 0.01. When $\mathbf {x}$ is far away from the $k$ -th cluster center $\mathbf {\bar{x}}_k$ , $\text{w}_k^a$ is close to 0 and the $k$ -th cluster center $\mathbf {\bar{x}}_k$ tends not to be updated. If $\mathbf {x}$ is instead close to the $j$ -th cluster center $\mathbf {\bar{x}}_j$ , $\text{w}_j^a$ is close to 1 and the centroid of the $j$ -th cluster $\mathbf {\bar{x}}_j$ will be updated more aggressively towards $\mathbf {x}$ .
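The updating step of Eq. 54 is then a weight-scaled move of each centroid towards the current question vector; a self-contained sketch with $\beta = 0.01$ and placeholder values for the centroids and adapting-step weights:

```python
import numpy as np

K, h, beta = 5, 8, 0.01                        # beta = 0.01 as in the paper
rng = np.random.default_rng(0)
centers = rng.standard_normal((K, h))          # current centroids \bar{x}_k (placeholders)
x = rng.standard_normal(h)                     # input question vector
w_a = np.full(K, 1.0 / K)                      # adapting-step weights (uniform placeholder)

def update_centers(centers, x, w_a, beta=beta):
    # Eq. 54: centroids far from x (small w_a) barely move; the closest one moves the most
    scale = (beta * w_a)[:, None]              # shape (K, 1)
    return (1.0 - scale) * centers + scale * x

centers = update_centers(centers, x, w_a)
```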
Set-Up
We test our models on the Stanford Question Answering Dataset (SQuAD) BIBREF3 . The SQuAD dataset consists of more than 100,000 questions annotated by crowdsourcing workers on a selected set of Wikipedia articles, and the answer to each question is a span of text in the Wikipedia articles. The training data includes 87,599 instances and the validation set has 10,570 instances. The test data is hidden and kept by the organizer. SQuAD is evaluated with Exact Match (EM) and F1 scores.
We use pre-trained 300-D GloVe 840B vectors BIBREF20 to initialize our word embeddings. Out-of-vocabulary (OOV) words are initialized randomly with Gaussian samples. The CharCNN filter lengths are 1, 3, and 5, with 50 dimensions each. All vectors, including word embeddings, are updated during training. The cluster number $K$ in the discriminative block is 100. The Adam method BIBREF25 is used for optimization, with the first momentum set to 0.9 and the second to 0.999. The initial learning rate is 0.0004 and the batch size is 32. We halve the learning rate on a bad iteration, with a patience of 7. Early stopping is based on the EM and F1 scores on the validation set. All hidden states of the GRUs and TreeLSTMs are 500 dimensions, while the word-level embedding $d_w$ is 300 dimensions. We set the maximum document length to 500 and drop question-document pairs beyond this length from the training set. The explicit question-type dimension $d_{ET}$ is 50. We apply dropout to the encoder layer and the aggregation layer with a dropout rate of 0.5.
Results
Table 1 shows the official leaderboard on SQuAD test set when we submitted our system. Our model achieves a 68.73% EM score and 77.39% F1 score, which is ranked among the state of the art single models (without model ensembling).
Table 2 shows the ablation performance of various Q-codes on the development set. Note that since the test set is hidden from us, we can only perform such an analysis on the development set. Our baseline model using no Q-code achieved 68.00% EM and 77.36% F1, respectively. When we added the explicit question-type T-code into the baseline model, the performance improved slightly to 68.16% (EM) and 77.58% (F1). We then used the TreeLSTM to introduce syntactic parses for question representation and understanding (replacing the simple question type as the question-understanding Q-code), which consistently shows further improvement. We further incorporated the soft adaptation. When setting the number of hidden question types ( $K$ ) to 20, the performance improves to 68.73%/77.74% on EM and F1, respectively, which corresponds to the results of our model reported in Table 1 . Furthermore, after submitting our result, we experimented with larger values of $K$ and found that with $K=100$ we can achieve a better performance of 69.10%/78.38% on the development set.
Figure UID61 shows the EM/F1 scores of different question types, while Figure UID62 shows the distribution of question types on the development set. In Figure UID61 we can see that the average EM/F1 scores of the "when" questions are the highest and those of the "why" questions are the lowest. From Figure UID62 we can see that the "what" question is the major class.
Figure 5 shows the composition of the F1 score. Taking our best model as an example, we observed a 78.38% F1 score on the whole development set, which can be separated into two parts: one is where the F1 score equals 100%, meaning an exact match; this part accounts for 69.10% of the entire development set. The other part accounts for 30.90%, with an average F1 score of 30.03%. The latter can be further divided into two sub-parts: one where the F1 score equals 0%, meaning the predicted answer is completely wrong, which occupies 14.89% of the total development set; the other accounts for 16.01% of the development set, with an average F1 score of 57.96%. From this analysis we can see that reducing the zero-F1 cases (14.89%) is potentially an important direction to further improve the system.
Conclusions
Closely modelling questions could be of importance for question answering and machine reading. In this paper, we introduce syntactic information to help encode questions in neural networks. We view and model different types of questions and the information shared among them as an adaptation task and proposed adaptation models for them. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches can help attain better results over a competitive baseline. | word embedding, input encoder, alignment, aggregation, and prediction. |
6c8bd7fa1cfb1b2bbeb011cc9c712dceac0c8f06 | 6c8bd7fa1cfb1b2bbeb011cc9c712dceac0c8f06_1 | Q: what is the architecture of the baseline model?
Text: Introduction
Enabling computers to understand given documents and answer questions about their content has recently attracted intensive interest, including but not limited to the efforts as in BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Many specific problems such as machine comprehension and question answering often involve modeling such question-document pairs.
The recent availability of relatively large training datasets (see Section "Related Work" for more details) has made it more feasible to train and estimate rather complex models in an end-to-end fashion for these problems, in which a whole model is fit directly with given question-answer tuples and the resulting model has shown to be rather effective.
In this paper, we take a closer look at modeling questions in such an end-to-end neural network framework, since we regard question understanding is of importance for such problems. We first introduced syntactic information to help encode questions. We then viewed and modelled different types of questions and the information shared among them as an adaptation problem and proposed adaptation models for them. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches can help attain better results on our competitive baselines.
Related Work
Recent advances in reading comprehension and question answering have been closely associated with the availability of various datasets. BIBREF0 released the MCTest data consisting of 500 short, fictional open-domain stories and 2000 questions. The CNN/Daily Mail dataset BIBREF1 contains news articles for cloze-style machine comprehension, in which only entities are removed and tested for comprehension. The Children's Book Test (CBT) BIBREF2 leverages named entities, common nouns, verbs, and prepositions to test reading comprehension. The Stanford Question Answering Dataset (SQuAD) BIBREF3 is a more recently released dataset, which consists of more than 100,000 questions for documents taken from Wikipedia across a wide range of topics. The question-answer pairs are annotated through crowdsourcing. Answers are spans of text marked in the original documents. In this paper, we use SQuAD to evaluate our models.
Many neural network models have been studied on the SQuAD task. BIBREF6 proposed match LSTM to associate documents and questions and adapted the so-called pointer Network BIBREF7 to determine the positions of the answer text spans. BIBREF8 proposed a dynamic chunk reader to extract and rank a set of answer candidates. BIBREF9 focused on word representation and presented a fine-grained gating mechanism to dynamically combine word-level and character-level representations based on the properties of words. BIBREF10 proposed a multi-perspective context matching (MPCM) model, which matched an encoded document and question from multiple perspectives. BIBREF11 proposed a dynamic decoder and so-called highway maxout network to improve the effectiveness of the decoder. The bi-directional attention flow (BIDAF) BIBREF12 used the bi-directional attention to obtain a question-aware context representation.
In this paper, we introduce syntactic information to encode questions with a specific form of recursive neural networks BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . More specifically, we explore a tree-structured LSTM BIBREF13 , BIBREF14 which extends the linear-chain long short-term memory (LSTM) BIBREF17 to a recursive structure, which has the potential to capture long-distance interactions over the structures.
Different types of questions are often used to seek different types of information. For example, a "what" question could have very different properties from those of a "why" question, while they may share information and need to be trained together instead of separately. We view this as an "adaptation" problem that lets different types of questions share a basic model but still discriminates between them when needed. Specifically, we are motivated by the "i-vector" idea BIBREF18 in speech recognition, where neural-network-based adaptation is performed among different groups of speakers; here we focus instead on different types of questions.
The Baseline Model
Our baseline model is composed of the following typical components: word embedding, input encoder, alignment, aggregation, and prediction. Below we discuss these components in more details.
We concatenate embedding at two levels to represent a word: the character composition and word-level embedding. The character composition feeds all characters of a word into a convolutional neural network (CNN) BIBREF19 to obtain a representation for the word. And we use the pre-trained 300-D GloVe vectors BIBREF20 (see the experiment section for details) to initialize our word-level embedding. Each word is therefore represented as the concatenation of the character-composition vector and word-level embedding. This is performed on both questions and documents, resulting in two matrices: the $\mathbf {Q}^e \in \mathbb {R} ^{N\times d_w}$ for a question and the $\mathbf {D}^e \in \mathbb {R} ^{M\times d_w}$ for a document, where $N$ is the question length (number of word tokens), $M$ is the document length, and $d_w$ is the embedding dimensionality.
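A minimal PyTorch sketch of this two-level word representation is shown below. The vocabulary sizes and dimensions are illustrative, the filter setting follows the set-up section (lengths 1, 3, 5 with 50 filters each), the padding choice is an assumption, and the embedding tables are randomly initialized here rather than loaded from GloVe.

```python
import torch
import torch.nn as nn

class CharCompositionEmbedding(nn.Module):
    """Concatenate a character-CNN composition vector with a word-level embedding."""
    def __init__(self, n_chars=100, d_char=20, n_words=5000, d_word=300,
                 filter_sizes=(1, 3, 5), n_filters=50):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, d_char, padding_idx=0)
        self.word_emb = nn.Embedding(n_words, d_word)   # initialized from GloVe in the paper; random here
        self.convs = nn.ModuleList(
            nn.Conv1d(d_char, n_filters, k, padding=k // 2) for k in filter_sizes)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq); char_ids: (batch, seq, max_word_len)
        b, s, L = char_ids.shape
        c = self.char_emb(char_ids).view(b * s, L, -1).transpose(1, 2)   # (b*s, d_char, L)
        c = torch.cat([conv(c).max(dim=2)[0] for conv in self.convs], dim=1)
        c = c.view(b, s, -1)                                             # (b, s, 3 * n_filters)
        return torch.cat([c, self.word_emb(word_ids)], dim=2)            # char part + word part

emb = CharCompositionEmbedding()
words = torch.randint(0, 5000, (2, 7))          # toy batch: 2 sequences, 7 words each
chars = torch.randint(1, 100, (2, 7, 12))       # 12 characters per word (index 0 is padding)
print(emb(words, chars).shape)                  # torch.Size([2, 7, 450])
```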
The above word representation focuses on representing individual words, and an input encoder here employs recurrent neural networks to obtain the representation of a word under its context. We use bi-directional GRU (BiGRU) BIBREF21 for both documents and questions.
$${\mathbf {Q}^c_i}&=\text{BiGRU}(\mathbf {Q}^e_i,i),\forall i \in [1, \dots , N] \\ {\mathbf {D}^c_j}&=\text{BiGRU}(\mathbf {D}^e_j,j),\forall j \in [1, \dots , M]$$ (Eq. 5)
A BiGRU runs a forward and a backward GRU on a sequence starting from the left and the right end, respectively. By concatenating the hidden states of these two GRUs for each word, we obtain a representation for a question or document: $\mathbf {Q}^c \in \mathbb {R} ^{N\times d_c}$ for a question and $\mathbf {D}^c \in \mathbb {R} ^{M\times d_c}$ for a document.
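A small PyTorch sketch of the BiGRU input encoder follows; the dimensions are toy-sized, and a single BiGRU is reused for both question and document here, which may or may not match the exact parameter sharing used in the experiments.

```python
import torch
import torch.nn as nn

d_w, d_hidden = 450, 250          # embedding width and per-direction hidden size (toy choice)
bigru = nn.GRU(input_size=d_w, hidden_size=d_hidden,
               bidirectional=True, batch_first=True)

Q_e = torch.randn(2, 11, d_w)     # (batch, question length N, d_w)
D_e = torch.randn(2, 60, d_w)     # (batch, document length M, d_w)

Q_c, _ = bigru(Q_e)               # (2, 11, 2 * d_hidden): forward and backward states concatenated
D_c, _ = bigru(D_e)               # (2, 60, 2 * d_hidden)
```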
Questions and documents interact closely. As in most previous work, our framework uses both soft attention over questions and soft attention over documents to capture the interaction between them. More specifically, in this soft-alignment layer, we first feed the contextual representation matrices $\mathbf {Q}^c$ and $\mathbf {D}^c$ into the following equation to obtain the alignment matrix $\mathbf {U} \in \mathbb {R} ^{N\times M}$ :
$$\mathbf {U}_{ij} =\mathbf {Q}_i^c \cdot \mathbf {D}_j^{c\mathrm {T}}, \forall i \in [1, \dots , N], \forall j \in [1, \dots , M]$$ (Eq. 7)
Each $\mathbf {U}_{ij}$ represents the similarity between a question word $\mathbf {Q}_i^c$ and a document word $\mathbf {D}_j^c$ .
Word-level Q-code Similar as in BIBREF12 , we obtain a word-level Q-code. Specifically, for each document word $w_j$ , we find which words in the question are relevant to it. To this end, $\mathbf {a}_j\in \mathbb {R} ^{N}$ is computed with the following equation and used as a soft attention weight:
$$\mathbf {a}_j = softmax(\mathbf {U}_{:j}), \forall j \in [1, \dots , M]$$ (Eq. 8)
With the attention weights computed, we obtain the encoding of the question for each document word $w_j$ as follows, which we call word-level Q-code in this paper:
$$\mathbf {Q}^w=\mathbf {a}^{\mathrm {T}} \cdot \mathbf {Q}^{c} \in \mathbb {R} ^{M\times d_c}$$ (Eq. 9)
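The alignment matrix and the word-level Q-code of Eq. 7–9 reduce to two matrix products and a column-wise softmax; a NumPy sketch with toy shapes:

```python
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

N, M, d_c = 11, 60, 16           # question length, document length, contextual width (toy sizes)
Qc = np.random.randn(N, d_c)     # question contextual states Q^c
Dc = np.random.randn(M, d_c)     # document contextual states D^c

U = Qc @ Dc.T                    # Eq. 7: (N, M) similarity matrix
a = softmax(U, axis=0)           # Eq. 8: attention over question words, one column per document word
Qw = a.T @ Qc                    # Eq. 9: word-level Q-code, shape (M, d_c)
```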
Question-based filtering To better explore question understanding, we design this question-based filtering layer. As detailed later, different question representation can be easily incorporated to this layer in addition to being used as a filter to find key information in the document based on the question. This layer is expandable with more complicated question modeling.
In the basic form of question-based filtering, for each question word $w_i$ , we find which words in the document are associated. Similar to $\mathbf {a}_j$ discussed above, we can obtain the attention weights on document words for each question word $w_i$ :
$$\mathbf {b}_i=softmax(\mathbf {U}_{i:})\in \mathbb {R} ^{M}, \forall i \in [1, \dots , N]$$ (Eq. 10)
By pooling $\mathbf {b}\in \mathbb {R} ^{N\times M}$ , we can obtain a question-based filtering weight $\mathbf {b}^f$ :
$$\mathbf {b}^f=norm(pooling(\mathbf {b})) \in \mathbb {R} ^{M}$$ (Eq. 11)
$$norm(\mathbf {x})=\frac{\mathbf {x}}{\sum _i x_i}$$ (Eq. 12)
where the specific pooling function we used include max-pooling and mean-pooling. Then the document softly filtered based on the corresponding question $\mathbf {D}^f$ can be calculated by:
$$\mathbf {D}_j^{f_{max}}=b^{f_{max}}_j \mathbf {D}_j^{c}, \forall j \in [1, \dots , M]$$ (Eq. 13)
$$\mathbf {D}_j^{f_{mean}}=b^{f_{mean}}_j \mathbf {D}_j^{c}, \forall j \in [1, \dots , M]$$ (Eq. 14)
Through concatenating the document representation $\mathbf {D}^c$ , word-level Q-code $\mathbf {Q}^w$ and question-filtered document $\mathbf {D}^f$ , we can finally obtain the alignment layer representation:
$$\mathbf {I}=[\mathbf {D}^c, \mathbf {Q}^w,\mathbf {D}^c \circ \mathbf {Q}^w,\mathbf {D}^c - \mathbf {Q}^w, \mathbf {D}^f, \mathbf {b}^{f_{max}}, \mathbf {b}^{f_{mean}}] \in \mathbb {R} ^{M \times (6d_c+2)}$$ (Eq. 16)
where " $\circ $ " stands for element-wise multiplication and " $-$ " is simply the vector subtraction.
After acquiring the local alignment representation, key information in the document and question has been collected, and the aggregation layer is then applied to find answers. We use three BiGRU layers to model the process that aggregates local information to make the global decision for finding the answer spans. We found that a residual architecture BIBREF22 , as described in Figure 2 , is very effective in this aggregation process:
$$\mathbf {I}^1_i=\text{BiGRU}(\mathbf {I}_i)$$ (Eq. 18)
$$\mathbf {I}^2_i=\mathbf {I}^1_i + \text{BiGRU}(\mathbf {I}^1_i)$$ (Eq. 19)
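A PyTorch sketch of the residual aggregation follows (Eq. 18–19); the third BiGRU layer is assumed here to follow the same residual pattern for the final output $\mathbf {I}^3$, and the dimensions are toy-sized.

```python
import torch
import torch.nn as nn

class ResidualBiGRUAggregation(nn.Module):
    """Three stacked BiGRUs with residual connections between layers."""
    def __init__(self, d_in, d_hidden):
        super().__init__()
        self.gru1 = nn.GRU(d_in, d_hidden, bidirectional=True, batch_first=True)
        self.gru2 = nn.GRU(2 * d_hidden, d_hidden, bidirectional=True, batch_first=True)
        self.gru3 = nn.GRU(2 * d_hidden, d_hidden, bidirectional=True, batch_first=True)

    def forward(self, I):
        I1, _ = self.gru1(I)                 # Eq. 18
        out2, _ = self.gru2(I1)
        I2 = I1 + out2                       # Eq. 19: residual connection
        out3, _ = self.gru3(I2)
        return I2 + out3                     # assumed third layer, same residual pattern

agg = ResidualBiGRUAggregation(d_in=98, d_hidden=16)
I = torch.randn(2, 60, 98)                   # (batch, M, 6*d_c + 2)
I3 = agg(I)                                  # (2, 60, 32)
```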
The SQuAD QA task requires a span of text to answer a question. We use a pointer network BIBREF7 to predict the starting and end position of answers as in BIBREF6 . Different from their methods, we use a two-directional prediction to obtain the positions. For one direction, we first predict the starting position of the answer span followed by predicting the end position, which is implemented with the following equations:
$$P(s+)=softmax(W_{s+}\cdot I^3)$$ (Eq. 23)
$$P(e+)=softmax(W_{e+} \cdot I^3 + W_{h+} \cdot h_{s+})$$ (Eq. 24)
where $\mathbf {I}^3$ is inference layer output, $\mathbf {h}_{s+}$ is the hidden state of the first step, and all $\mathbf {W}$ are trainable matrices. We also perform this by predicting the end position first and then the starting position:
$$P(e-)=softmax(W_{e-}\cdot I^3)$$ (Eq. 25)
$$P(s-)=softmax(W_{s-} \cdot I^3 + W_{h-} \cdot h_{e-})$$ (Eq. 26)
We finally identify the span of an answer with the following equation:
$$P(s)=pooling([P(s+), P(s-)])$$ (Eq. 27)
$$P(e)=pooling([P(e+), P(e-)])$$ (Eq. 28)
We use the mean-pooling here as it is more effective on the development set than the alternatives such as the max-pooling.
Question Understanding and Adaptation
The interplay of syntax and semantics of natural language questions is of interest for question representation. We attempt to incorporate syntactic information in questions representation with TreeLSTM BIBREF13 , BIBREF14 . In general a TreeLSTM could perform semantic composition over given syntactic structures.
Unlike the chain-structured LSTM BIBREF17 , the TreeLSTM captures long-distance interactions on a tree. The update of a TreeLSTM node is described at a high level with Equation ( 31 ), and the detailed computation is given in the remaining equations of that block. Specifically, the input of a TreeLSTM node is used to configure four gates: the input gate $\mathbf {i}_t$ , output gate $\mathbf {o}_t$ , and the two forget gates $\mathbf {f}_t^L$ for the left child input and $\mathbf {f}_t^R$ for the right. The memory cell $\mathbf {c}_t$ considers each child's cell vector, $\mathbf {c}_{t-1}^L$ and $\mathbf {c}_{t-1}^R$ , which are gated by the left forget gate $\mathbf {f}_t^L$ and the right forget gate $\mathbf {f}_t^R$ , respectively.
$$\mathbf {h}_t &= \text{TreeLSTM}(\mathbf {x}_t, \mathbf {h}_{t-1}^L, \mathbf {h}_{t-1}^R), \\ \mathbf {h}_t &= \mathbf {o}_t \circ \tanh (\mathbf {c}_{t}),\\ \mathbf {o}_t &= \sigma (\mathbf {W}_o \mathbf {x}_t + \mathbf {U}_o^L \mathbf {h}_{t-1}^L + \mathbf {U}_o^R \mathbf {h}_{t-1}^R), \\\mathbf {c}_t &= \mathbf {f}_t^L \circ \mathbf {c}_{t-1}^L + \mathbf {f}_t^R \circ \mathbf {c}_{t-1}^R + \mathbf {i}_t \circ \mathbf {u}_t, \\\mathbf {f}_t^L &= \sigma (\mathbf {W}_f \mathbf {x}_t + \mathbf {U}_f^{LL} \mathbf {h}_{t-1}^L + \mathbf {U}_f^{LR} \mathbf {h}_{t-1}^R),\\ \mathbf {f}_t^R &= \sigma (\mathbf {W}_f \mathbf {x}_t + \mathbf {U}_f^{RL} \mathbf {h}_{t-1}^L + \mathbf {U}_f^{RR} \mathbf {h}_{t-1}^R), \\\mathbf {i}_t &= \sigma (\mathbf {W}_i \mathbf {x}_t + \mathbf {U}_i^L \mathbf {h}_{t-1}^L + \mathbf {U}_i^R \mathbf {h}_{t-1}^R), \\\mathbf {u}_t &= \tanh (\mathbf {W}_c \mathbf {x}_t + \mathbf {U}_c^L \mathbf {h}_{t-1}^L + \mathbf {U}_c^R \mathbf {h}_{t-1}^R),$$ (Eq. 31)
where $\sigma $ is the sigmoid function, $\circ $ is the element-wise multiplication of two vectors, and all $\mathbf {W}$ , $\mathbf {U}$ are trainable matrices.
To obtain the parse tree information, we use Stanford CoreNLP (PCFG Parser) BIBREF23 , BIBREF24 to produce a binarized constituency parse for each question and build the TreeLSTM based on the parse tree. The root node of TreeLSTM is used as the representation for the whole question. More specifically, we use it as TreeLSTM Q-code $\mathbf {Q}^{TL}\in \mathbb {R} ^{d_c}$ , by not only simply concatenating it to the alignment layer output but also using it as a question filter, just as we discussed in the question-based filtering section:
$$\mathbf {Q}^{TL}=\text{TreeLSTM}(\mathbf {Q}^e) \in \mathbb {R} ^{d_c}$$ (Eq. 32)
$$\mathbf {b}^{TL}=norm(\mathbf {Q}^{TL} \cdot \mathbf {D}^{c\mathrm {T}}) \in \mathbb {R} ^{M}$$ (Eq. 33)
where $\mathbf {I}_{new}$ is the new output of the alignment layer, obtained by concatenating $repmat(\mathbf {Q}^{TL})$ to $\mathbf {I}$ ; the function $repmat$ copies $\mathbf {Q}^{TL}$ $M$ times to match the shape of $\mathbf {I}$ .
Questions by nature are often composed to fulfill different types of information needs. For example, a "when" question seeks for different types of information (i.e., temporal information) than those for a "why" question. Different types of questions and the corresponding answers could potentially have different distributional regularity.
The previous models are often trained for all questions without explicitly discriminating different question types; however, for a target question, both the common features shared by all questions and the specific features for a specific type of question are further considered in this paper, as they could potentially obey different distributions. In this paper we further explicitly model different types of questions in the end-to-end training. We start from a simple way to first analyze the word frequency of all questions, and obtain top-10 most frequent question types: what, how, who, when, which, where, why, be, whose, and whom, in which be stands for the questions beginning with different forms of the word be such as is, am, and are. We explicitly encode question-type information to be an 11-dimensional one-hot vector (the top-10 question types and "other" question type). Each question type is with a trainable embedding vector. We call this explicit question type code, $\mathbf {ET}\in \mathbb {R} ^{d_{ET}}$ . Then the vector for each question type is tuned during training, and is added to the system with the following equation:
$$\mathbf {I}_{new}=[\mathbf {I}, repmat(\mathbf {ET})]$$ (Eq. 38)
As discussed, different types of questions and their answers may share common regularity and have separate property at the same time. We also view this as an adaptation problem in order to let different types of questions share a basic model but still discriminate them when needed. Specifically, we borrow ideas from speaker adaptation BIBREF18 in speech recognition, where neural-network-based adaptation is performed among different groups of speakers.
Conceptually we regard a type of questions as a group of acoustically similar speakers. Specifically we propose a question discriminative block or simply called a discriminative block (Figure 3 ) below to perform question adaptation. The main idea is described below:
$$\mathbf {x^\prime } = f([\mathbf {x}, \mathbf {\bar{x}}^c, \mathbf {\delta _x}])$$ (Eq. 40)
For each input question $\mathbf {x}$ , we can decompose it into two parts: the cluster it belongs to (i.e., the question type) and its deviation within that cluster. The cluster information is encoded in a vector $\mathbf {\bar{x}}^c$ . To keep the computation differentiable, we weight all clusters based on the distance between $\mathbf {x}$ and each cluster center vector, instead of just choosing the closest cluster. The discriminative vector $\mathbf {\delta _x}$ with regard to these most relevant clusters is then computed. All this information is combined to obtain the discriminative information. To retain the full information of the input, we also feed the input question $\mathbf {x}$ , together with the acquired discriminative information, into a feed-forward layer to obtain a new representation $\mathbf {x^\prime }$ for the question.
More specifically, the adaptation algorithm contains two steps: adapting and updating, which is detailed as follows:
Adapting In the adapting step, we first compute the similarity score between an input question vector $\mathbf {x}\in \mathbb {R} ^{h}$ and each centroid vector of the $K$ clusters $~\mathbf {\bar{x}}\in \mathbb {R} ^{K \times h}$ . Each cluster here models a question type. Unlike the explicit question-type modeling discussed above, here we do not specify which question types we are modeling but let the system learn them. Specifically, we only need to pre-specify the number of clusters, $K$ . The similarity between an input question and a cluster centroid is used to compute the similarity weight $\mathbf {w}^a$ :
$$w_k^a = softmax(cos\_sim(\mathbf {x}, \mathbf {\bar{x}}_k), \alpha ), \forall k \in [1, \dots , K]$$ (Eq. 43)
$$cos\_sim(\mathbf {u}, \mathbf {v}) = \frac{<\mathbf {u},\mathbf {v}>}{||\mathbf {u}|| \cdot ||\mathbf {v}||}$$ (Eq. 44)
We set $\alpha $ to 50 to ensure that only the closest cluster receives a high weight while keeping the computation differentiable. Then we acquire a soft class-center vector $\mathbf {\bar{x}}^c$ :
$$\mathbf {\bar{x}}^c = \sum _k w^a_k \mathbf {\bar{x}}_k \in \mathbb {R} ^{h}$$ (Eq. 46)
We then compute a discriminative vector $\mathbf {\delta _x}$ between the input question with regard to the soft class-center vector:
$$\mathbf {\delta _x} = \mathbf {x} - \mathbf {\bar{x}}^c$$ (Eq. 47)
Note that $\bar{\mathbf {x}}^c$ here models the cluster information and $\mathbf {\delta _x}$ represents the discriminative information in the cluster. By feeding $\mathbf {x}$ , $\bar{\mathbf {x}}^c$ and $\mathbf {\delta _x}$ into feedforward layer with Relu, we obtain $\mathbf {x^{\prime }}\in \mathbb {R} ^{K}$ :
$$\mathbf {x^{\prime }} = Relu(\mathbf {W} \cdot [\mathbf {x},\bar{\mathbf {x}}^c,\mathbf {\delta _x}])$$ (Eq. 48)
With $\mathbf {x^{\prime }}$ ready, we can apply Discriminative Block to any question code and obtain its adaptation Q-code. In this paper, we use TreeLSTM Q-code as the input vector $\mathbf {x}$ , and obtain TreeLSTM adaptation Q-code $\mathbf {Q}^{TLa}\in \mathbb {R} ^{d_c}$ . Similar to TreeLSTM Q-code $\mathbf {Q}^{TL}$ , we concatenate $\mathbf {Q}^{TLa}$ to alignment output $\mathbf {I}$ and also use it as a question filter:
$$\mathbf {Q}^{TLa} = Relu(\mathbf {W} \cdot [\mathbf {Q}^{TL},\overline{\mathbf {Q}^{TL}}^c,\mathbf {\delta _{\mathbf {Q}^{TL}}}])$$ (Eq. 49)
$$\mathbf {b}^{TLa}=norm(\mathbf {Q}^{TLa} \cdot \mathbf {D}^{c\mathrm {T}}) \in \mathbb {R} ^{M}$$ (Eq. 50)
Updating The updating stage attempts to modify the center vectors of the $K$ clusters in order to fit each cluster to model different types of questions. The updating is performed according to the following formula:
$$\mathbf {\bar{x}^{\prime }}_k = (1-\beta \text{w}_k^a)\mathbf {\bar{x}}_k+\beta \text{w}_k^a\mathbf {x}, \forall k \in [1, \dots , K]$$ (Eq. 54)
In the equation, $\beta $ is an updating rate used to control the magnitude of each update, and we set it to 0.01. When $\mathbf {x}$ is far away from the $k$ -th cluster center $\mathbf {\bar{x}}_k$ , $\text{w}_k^a$ is close to 0 and the $k$ -th cluster center $\mathbf {\bar{x}}_k$ tends not to be updated. If $\mathbf {x}$ is instead close to the $j$ -th cluster center $\mathbf {\bar{x}}_j$ , $\text{w}_j^a$ is close to 1 and the centroid of the $j$ -th cluster $\mathbf {\bar{x}}_j$ will be updated more aggressively towards $\mathbf {x}$ .
Set-Up
We test our models on the Stanford Question Answering Dataset (SQuAD) BIBREF3 . The SQuAD dataset consists of more than 100,000 questions annotated by crowdsourcing workers on a selected set of Wikipedia articles, and the answer to each question is a span of text in the Wikipedia articles. The training data includes 87,599 instances and the validation set has 10,570 instances. The test data is hidden and kept by the organizer. SQuAD is evaluated with Exact Match (EM) and F1 scores.
We use pre-trained 300-D GloVe 840B vectors BIBREF20 to initialize our word embeddings. Out-of-vocabulary (OOV) words are initialized randomly with Gaussian samples. The CharCNN filter lengths are 1, 3, and 5, with 50 dimensions each. All vectors, including word embeddings, are updated during training. The cluster number $K$ in the discriminative block is 100. The Adam method BIBREF25 is used for optimization, with the first momentum set to 0.9 and the second to 0.999. The initial learning rate is 0.0004 and the batch size is 32. We halve the learning rate on a bad iteration, with a patience of 7. Early stopping is based on the EM and F1 scores on the validation set. All hidden states of the GRUs and TreeLSTMs are 500 dimensions, while the word-level embedding $d_w$ is 300 dimensions. We set the maximum document length to 500 and drop question-document pairs beyond this length from the training set. The explicit question-type dimension $d_{ET}$ is 50. We apply dropout to the encoder layer and the aggregation layer with a dropout rate of 0.5.
Results
Table 1 shows the official leaderboard on SQuAD test set when we submitted our system. Our model achieves a 68.73% EM score and 77.39% F1 score, which is ranked among the state of the art single models (without model ensembling).
Table 2 shows the ablation performance of various Q-codes on the development set. Note that since the test set is hidden from us, we can only perform such an analysis on the development set. Our baseline model using no Q-code achieved 68.00% EM and 77.36% F1, respectively. When we added the explicit question-type T-code into the baseline model, the performance improved slightly to 68.16% (EM) and 77.58% (F1). We then used the TreeLSTM to introduce syntactic parses for question representation and understanding (replacing the simple question type as the question-understanding Q-code), which consistently shows further improvement. We further incorporated the soft adaptation. When setting the number of hidden question types ( $K$ ) to 20, the performance improves to 68.73%/77.74% on EM and F1, respectively, which corresponds to the results of our model reported in Table 1 . Furthermore, after submitting our result, we experimented with larger values of $K$ and found that with $K=100$ we can achieve a better performance of 69.10%/78.38% on the development set.
Figure UID61 shows the EM/F1 scores of different question types, while Figure UID62 shows the distribution of question types on the development set. In Figure UID61 we can see that the average EM/F1 scores of the "when" questions are the highest and those of the "why" questions are the lowest. From Figure UID62 we can see that the "what" question is the major class.
Figure 5 shows the composition of the F1 score. Taking our best model as an example, we observed a 78.38% F1 score on the whole development set, which can be separated into two parts: one is where the F1 score equals 100%, meaning an exact match; this part accounts for 69.10% of the entire development set. The other part accounts for 30.90%, with an average F1 score of 30.03%. The latter can be further divided into two sub-parts: one where the F1 score equals 0%, meaning the predicted answer is completely wrong, which occupies 14.89% of the total development set; the other accounts for 16.01% of the development set, with an average F1 score of 57.96%. From this analysis we can see that reducing the zero-F1 cases (14.89%) is potentially an important direction to further improve the system.
Conclusions
Closely modelling questions could be of importance for question answering and machine reading. In this paper, we introduce syntactic information to help encode questions in neural networks. We view and model different types of questions and the information shared among them as an adaptation task and proposed adaptation models for them. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches can help attain better results over a competitive baseline. | Our baseline model is composed of the following typical components: word embedding, input encoder, alignment, aggregation, and prediction. |
fa218b297d9cdcae238cef71096752ce27ca8f4a | fa218b297d9cdcae238cef71096752ce27ca8f4a_0 | Q: What is the exact performance on SQUAD?
Text: Introduction
Enabling computers to understand given documents and answer questions about their content has recently attracted intensive interest, including but not limited to the efforts as in BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Many specific problems such as machine comprehension and question answering often involve modeling such question-document pairs.
The recent availability of relatively large training datasets (see Section "Related Work" for more details) has made it more feasible to train and estimate rather complex models in an end-to-end fashion for these problems, in which a whole model is fit directly with given question-answer tuples and the resulting model has shown to be rather effective.
In this paper, we take a closer look at modeling questions in such an end-to-end neural network framework, since we regard question understanding is of importance for such problems. We first introduced syntactic information to help encode questions. We then viewed and modelled different types of questions and the information shared among them as an adaptation problem and proposed adaptation models for them. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches can help attain better results on our competitive baselines.
Related Work
Recent advances in reading comprehension and question answering have been closely associated with the availability of various datasets. BIBREF0 released the MCTest data consisting of 500 short, fictional open-domain stories and 2000 questions. The CNN/Daily Mail dataset BIBREF1 contains news articles for cloze-style machine comprehension, in which only entities are removed and tested for comprehension. The Children's Book Test (CBT) BIBREF2 leverages named entities, common nouns, verbs, and prepositions to test reading comprehension. The Stanford Question Answering Dataset (SQuAD) BIBREF3 is a more recently released dataset, which consists of more than 100,000 questions for documents taken from Wikipedia across a wide range of topics. The question-answer pairs are annotated through crowdsourcing. Answers are spans of text marked in the original documents. In this paper, we use SQuAD to evaluate our models.
Many neural network models have been studied on the SQuAD task. BIBREF6 proposed match LSTM to associate documents and questions and adapted the so-called pointer Network BIBREF7 to determine the positions of the answer text spans. BIBREF8 proposed a dynamic chunk reader to extract and rank a set of answer candidates. BIBREF9 focused on word representation and presented a fine-grained gating mechanism to dynamically combine word-level and character-level representations based on the properties of words. BIBREF10 proposed a multi-perspective context matching (MPCM) model, which matched an encoded document and question from multiple perspectives. BIBREF11 proposed a dynamic decoder and so-called highway maxout network to improve the effectiveness of the decoder. The bi-directional attention flow (BIDAF) BIBREF12 used the bi-directional attention to obtain a question-aware context representation.
In this paper, we introduce syntactic information to encode questions with a specific form of recursive neural networks BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . More specifically, we explore a tree-structured LSTM BIBREF13 , BIBREF14 which extends the linear-chain long short-term memory (LSTM) BIBREF17 to a recursive structure, which has the potential to capture long-distance interactions over the structures.
Different types of questions are often used to seek different types of information. For example, a "what" question could have very different properties from those of a "why" question, while they may share information and need to be trained together instead of separately. We view this as an "adaptation" problem that lets different types of questions share a basic model but still discriminates between them when needed. Specifically, we are motivated by the "i-vector" idea BIBREF18 in speech recognition, where neural-network-based adaptation is performed among different groups of speakers; here we focus instead on different types of questions.
The Baseline Model
Our baseline model is composed of the following typical components: word embedding, input encoder, alignment, aggregation, and prediction. Below we discuss these components in more details.
We concatenate embedding at two levels to represent a word: the character composition and word-level embedding. The character composition feeds all characters of a word into a convolutional neural network (CNN) BIBREF19 to obtain a representation for the word. And we use the pre-trained 300-D GloVe vectors BIBREF20 (see the experiment section for details) to initialize our word-level embedding. Each word is therefore represented as the concatenation of the character-composition vector and word-level embedding. This is performed on both questions and documents, resulting in two matrices: the $\mathbf {Q}^e \in \mathbb {R} ^{N\times d_w}$ for a question and the $\mathbf {D}^e \in \mathbb {R} ^{M\times d_w}$ for a document, where $N$ is the question length (number of word tokens), $M$ is the document length, and $d_w$ is the embedding dimensionality.
The above word representation focuses on representing individual words, and an input encoder here employs recurrent neural networks to obtain the representation of a word under its context. We use bi-directional GRU (BiGRU) BIBREF21 for both documents and questions.
$${\mathbf {Q}^c_i}&=\text{BiGRU}(\mathbf {Q}^e_i,i),\forall i \in [1, \dots , N] \\ {\mathbf {D}^c_j}&=\text{BiGRU}(\mathbf {D}^e_j,j),\forall j \in [1, \dots , M]$$ (Eq. 5)
A BiGRU runs a forward and a backward GRU on a sequence starting from the left and the right end, respectively. By concatenating the hidden states of these two GRUs for each word, we obtain a representation for a question or document: $\mathbf {Q}^c \in \mathbb {R} ^{N\times d_c}$ for a question and $\mathbf {D}^c \in \mathbb {R} ^{M\times d_c}$ for a document.
Questions and documents interact closely. As in most previous work, our framework uses both soft attention over questions and soft attention over documents to capture the interaction between them. More specifically, in this soft-alignment layer, we first feed the contextual representation matrices $\mathbf {Q}^c$ and $\mathbf {D}^c$ into the following equation to obtain the alignment matrix $\mathbf {U} \in \mathbb {R} ^{N\times M}$ :
$$\mathbf {U}_{ij} =\mathbf {Q}_i^c \cdot \mathbf {D}_j^{c\mathrm {T}}, \forall i \in [1, \dots , N], \forall j \in [1, \dots , M]$$ (Eq. 7)
Each $\mathbf {U}_{ij}$ represents the similarity between a question word $\mathbf {Q}_i^c$ and a document word $\mathbf {D}_j^c$ .
Word-level Q-code Similar as in BIBREF12 , we obtain a word-level Q-code. Specifically, for each document word $w_j$ , we find which words in the question are relevant to it. To this end, $\mathbf {a}_j\in \mathbb {R} ^{N}$ is computed with the following equation and used as a soft attention weight:
$$\mathbf {a}_j = softmax(\mathbf {U}_{:j}), \forall j \in [1, \dots , M]$$ (Eq. 8)
With the attention weights computed, we obtain the encoding of the question for each document word $w_j$ as follows, which we call word-level Q-code in this paper:
$$\mathbf {Q}^w=\mathbf {a}^{\mathrm {T}} \cdot \mathbf {Q}^{c} \in \mathbb {R} ^{M\times d_c}$$ (Eq. 9)
Question-based filtering To better explore question understanding, we design this question-based filtering layer. As detailed later, different question representation can be easily incorporated to this layer in addition to being used as a filter to find key information in the document based on the question. This layer is expandable with more complicated question modeling.
In the basic form of question-based filtering, for each question word $w_i$ , we find which words in the document are associated. Similar to $\mathbf {a}_j$ discussed above, we can obtain the attention weights on document words for each question word $w_i$ :
$$\mathbf {b}_i=softmax(\mathbf {U}_{i:})\in \mathbb {R} ^{M}, \forall i \in [1, \dots , N]$$ (Eq. 10)
By pooling $\mathbf {b}\in \mathbb {R} ^{N\times M}$ , we can obtain a question-based filtering weight $\mathbf {b}^f$ :
$$\mathbf {b}^f=norm(pooling(\mathbf {b})) \in \mathbb {R} ^{M}$$ (Eq. 11)
$$norm(\mathbf {x})=\frac{\mathbf {x}}{\sum _i x_i}$$ (Eq. 12)
where the specific pooling function we used include max-pooling and mean-pooling. Then the document softly filtered based on the corresponding question $\mathbf {D}^f$ can be calculated by:
$$\mathbf {D}_j^{f_{max}}=b^{f_{max}}_j \mathbf {D}_j^{c}, \forall j \in [1, \dots , M]$$ (Eq. 13)
$$\mathbf {D}_j^{f_{mean}}=b^{f_{mean}}_j \mathbf {D}_j^{c}, \forall j \in [1, \dots , M]$$ (Eq. 14)
Through concatenating the document representation $\mathbf {D}^c$ , word-level Q-code $\mathbf {Q}^w$ and question-filtered document $\mathbf {D}^f$ , we can finally obtain the alignment layer representation:
$$\mathbf {I}=[\mathbf {D}^c, \mathbf {Q}^w,\mathbf {D}^c \circ \mathbf {Q}^w,\mathbf {D}^c - \mathbf {Q}^w, \mathbf {D}^f, \mathbf {b}^{f_{max}}, \mathbf {b}^{f_{mean}}] \in \mathbb {R} ^{M \times (6d_c+2)}$$ (Eq. 16)
where " $\circ $ " stands for element-wise multiplication and " $-$ " is simply the vector subtraction.
After acquiring the local alignment representation, key information in the document and question has been collected, and the aggregation layer is then applied to find answers. We use three BiGRU layers to model the process that aggregates local information to make the global decision for finding the answer spans. We found that a residual architecture BIBREF22 , as described in Figure 2 , is very effective in this aggregation process:
$$\mathbf {I}^1_i=\text{BiGRU}(\mathbf {I}_i)$$ (Eq. 18)
$$\mathbf {I}^2_i=\mathbf {I}^1_i + \text{BiGRU}(\mathbf {I}^1_i)$$ (Eq. 19)
The SQuAD QA task requires a span of text to answer a question. We use a pointer network BIBREF7 to predict the starting and end position of answers as in BIBREF6 . Different from their methods, we use a two-directional prediction to obtain the positions. For one direction, we first predict the starting position of the answer span followed by predicting the end position, which is implemented with the following equations:
$$P(s+)=softmax(W_{s+}\cdot I^3)$$ (Eq. 23)
$$P(e+)=softmax(W_{e+} \cdot I^3 + W_{h+} \cdot h_{s+})$$ (Eq. 24)
where $\mathbf {I}^3$ is inference layer output, $\mathbf {h}_{s+}$ is the hidden state of the first step, and all $\mathbf {W}$ are trainable matrices. We also perform this by predicting the end position first and then the starting position:
$$P(e-)=softmax(W_{e-}\cdot I^3)$$ (Eq. 25)
$$P(s-)=softmax(W_{s-} \cdot I^3 + W_{h-} \cdot h_{e-})$$ (Eq. 26)
We finally identify the span of an answer with the following equation:
$$P(s)=pooling([P(s+), P(s-)])$$ (Eq. 27)
$$P(e)=pooling([P(e+), P(e-)])$$ (Eq. 28)
We use the mean-pooling here as it is more effective on the development set than the alternatives such as the max-pooling.
Question Understanding and Adaptation
The interplay of syntax and semantics of natural language questions is of interest for question representation. We attempt to incorporate syntactic information in questions representation with TreeLSTM BIBREF13 , BIBREF14 . In general a TreeLSTM could perform semantic composition over given syntactic structures.
Unlike the chain-structured LSTM BIBREF17 , the TreeLSTM captures long-distance interactions on a tree. The update of a TreeLSTM node is described at a high level with Equation ( 31 ), and the detailed computation is given in the remaining equations of that block. Specifically, the input of a TreeLSTM node is used to configure four gates: the input gate $\mathbf {i}_t$ , output gate $\mathbf {o}_t$ , and the two forget gates $\mathbf {f}_t^L$ for the left child input and $\mathbf {f}_t^R$ for the right. The memory cell $\mathbf {c}_t$ considers each child's cell vector, $\mathbf {c}_{t-1}^L$ and $\mathbf {c}_{t-1}^R$ , which are gated by the left forget gate $\mathbf {f}_t^L$ and the right forget gate $\mathbf {f}_t^R$ , respectively.
$$\mathbf {h}_t &= \text{TreeLSTM}(\mathbf {x}_t, \mathbf {h}_{t-1}^L, \mathbf {h}_{t-1}^R), \\ \mathbf {h}_t &= \mathbf {o}_t \circ \tanh (\mathbf {c}_{t}),\\ \mathbf {o}_t &= \sigma (\mathbf {W}_o \mathbf {x}_t + \mathbf {U}_o^L \mathbf {h}_{t-1}^L + \mathbf {U}_o^R \mathbf {h}_{t-1}^R), \\\mathbf {c}_t &= \mathbf {f}_t^L \circ \mathbf {c}_{t-1}^L + \mathbf {f}_t^R \circ \mathbf {c}_{t-1}^R + \mathbf {i}_t \circ \mathbf {u}_t, \\\mathbf {f}_t^L &= \sigma (\mathbf {W}_f \mathbf {x}_t + \mathbf {U}_f^{LL} \mathbf {h}_{t-1}^L + \mathbf {U}_f^{LR} \mathbf {h}_{t-1}^R),\\ \mathbf {f}_t^R &= \sigma (\mathbf {W}_f \mathbf {x}_t + \mathbf {U}_f^{RL} \mathbf {h}_{t-1}^L + \mathbf {U}_f^{RR} \mathbf {h}_{t-1}^R), \\\mathbf {i}_t &= \sigma (\mathbf {W}_i \mathbf {x}_t + \mathbf {U}_i^L \mathbf {h}_{t-1}^L + \mathbf {U}_i^R \mathbf {h}_{t-1}^R), \\\mathbf {u}_t &= \tanh (\mathbf {W}_c \mathbf {x}_t + \mathbf {U}_c^L \mathbf {h}_{t-1}^L + \mathbf {U}_c^R \mathbf {h}_{t-1}^R),$$ (Eq. 31)
where $\sigma $ is the sigmoid function, $\circ $ is the element-wise multiplication of two vectors, and all $\mathbf {W}$ , $\mathbf {U}$ are trainable matrices.
To obtain the parse tree information, we use Stanford CoreNLP (PCFG Parser) BIBREF23 , BIBREF24 to produce a binarized constituency parse for each question and build the TreeLSTM based on the parse tree. The root node of TreeLSTM is used as the representation for the whole question. More specifically, we use it as TreeLSTM Q-code $\mathbf {Q}^{TL}\in \mathbb {R} ^{d_c}$ , by not only simply concatenating it to the alignment layer output but also using it as a question filter, just as we discussed in the question-based filtering section:
$$\mathbf {Q}^{TL}=\text{TreeLSTM}(\mathbf {Q}^e) \in \mathbb {R} ^{d_c}$$ (Eq. 32)
$$\mathbf {b}^{TL}=norm(\mathbf {Q}^{TL} \cdot \mathbf {D}^{c\mathrm {T}}) \in \mathbb {R} ^{M}$$ (Eq. 33)
where $\mathbf {I}_{new}$ is the new output of the alignment layer, obtained by concatenating $repmat(\mathbf {Q}^{TL})$ to $\mathbf {I}$ ; the function $repmat$ copies $\mathbf {Q}^{TL}$ $M$ times to match the shape of $\mathbf {I}$ .
Questions by nature are often composed to fulfill different types of information needs. For example, a "when" question seeks for different types of information (i.e., temporal information) than those for a "why" question. Different types of questions and the corresponding answers could potentially have different distributional regularity.
The previous models are often trained for all questions without explicitly discriminating different question types; however, for a target question, both the common features shared by all questions and the specific features for a specific type of question are further considered in this paper, as they could potentially obey different distributions. In this paper we further explicitly model different types of questions in the end-to-end training. We start from a simple way to first analyze the word frequency of all questions, and obtain top-10 most frequent question types: what, how, who, when, which, where, why, be, whose, and whom, in which be stands for the questions beginning with different forms of the word be such as is, am, and are. We explicitly encode question-type information to be an 11-dimensional one-hot vector (the top-10 question types and "other" question type). Each question type is with a trainable embedding vector. We call this explicit question type code, $\mathbf {ET}\in \mathbb {R} ^{d_{ET}}$ . Then the vector for each question type is tuned during training, and is added to the system with the following equation:
$$\mathbf {I}_{new}=[\mathbf {I}, repmat(\mathbf {ET})]$$ (Eq. 38)
As discussed, different types of questions and their answers may share common regularity and have separate property at the same time. We also view this as an adaptation problem in order to let different types of questions share a basic model but still discriminate them when needed. Specifically, we borrow ideas from speaker adaptation BIBREF18 in speech recognition, where neural-network-based adaptation is performed among different groups of speakers.
Conceptually we regard a type of questions as a group of acoustically similar speakers. Specifically we propose a question discriminative block or simply called a discriminative block (Figure 3 ) below to perform question adaptation. The main idea is described below:
$$\mathbf {x^\prime } = f([\mathbf {x}, \mathbf {\bar{x}}^c, \mathbf {\delta _x}])$$ (Eq. 40)
For each input question $\mathbf {x}$ , we can decompose it into two parts: the cluster it belongs to (i.e., the question type) and its deviation within that cluster. The cluster information is encoded in a vector $\mathbf {\bar{x}}^c$ . To keep the computation differentiable, we weight all clusters based on the distance between $\mathbf {x}$ and each cluster center vector, instead of just choosing the closest cluster. The discriminative vector $\mathbf {\delta _x}$ with regard to these most relevant clusters is then computed. All this information is combined to obtain the discriminative information. To retain the full information of the input, we also feed the input question $\mathbf {x}$ , together with the acquired discriminative information, into a feed-forward layer to obtain a new representation $\mathbf {x^\prime }$ for the question.
More specifically, the adaptation algorithm contains two steps: adapting and updating, which is detailed as follows:
Adapting In the adapting step, we first compute the similarity score between an input question vector $\mathbf {x}\in \mathbb {R} ^{h}$ and each centroid vector of the $K$ clusters $~\mathbf {\bar{x}}\in \mathbb {R} ^{K \times h}$ . Each cluster here models a question type. Unlike the explicit question-type modeling discussed above, here we do not specify which question types we are modeling but let the system learn them. Specifically, we only need to pre-specify the number of clusters, $K$ . The similarity between an input question and a cluster centroid is used to compute the similarity weight $\mathbf {w}^a$ :
$$w_k^a = softmax(cos\_sim(\mathbf {x}, \mathbf {\bar{x}}_k), \alpha ), \forall k \in [1, \dots , K]$$ (Eq. 43)
$$cos\_sim(\mathbf {u}, \mathbf {v}) = \frac{<\mathbf {u},\mathbf {v}>}{||\mathbf {u}|| \cdot ||\mathbf {v}||}$$ (Eq. 44)
We set $\alpha $ to 50 to ensure that only the closest cluster receives a high weight while keeping the computation differentiable. Then we acquire a soft class-center vector $\mathbf {\bar{x}}^c$ :
$$\mathbf {\bar{x}}^c = \sum _k w^a_k \mathbf {\bar{x}}_k \in \mathbb {R} ^{h}$$ (Eq. 46)
We then compute a discriminative vector $\mathbf {\delta _x}$ between the input question with regard to the soft class-center vector:
$$\mathbf {\delta _x} = \mathbf {x} - \mathbf {\bar{x}}^c$$ (Eq. 47)
Note that $\bar{\mathbf {x}}^c$ here models the cluster information and $\mathbf {\delta _x}$ represents the discriminative information in the cluster. By feeding $\mathbf {x}$ , $\bar{\mathbf {x}}^c$ and $\mathbf {\delta _x}$ into feedforward layer with Relu, we obtain $\mathbf {x^{\prime }}\in \mathbb {R} ^{K}$ :
$$\mathbf {x^{\prime }} = Relu(\mathbf {W} \cdot [\mathbf {x},\bar{\mathbf {x}}^c,\mathbf {\delta _x}])$$ (Eq. 48)
With $\mathbf {x^{\prime }}$ ready, we can apply Discriminative Block to any question code and obtain its adaptation Q-code. In this paper, we use TreeLSTM Q-code as the input vector $\mathbf {x}$ , and obtain TreeLSTM adaptation Q-code $\mathbf {Q}^{TLa}\in \mathbb {R} ^{d_c}$ . Similar to TreeLSTM Q-code $\mathbf {Q}^{TL}$ , we concatenate $\mathbf {Q}^{TLa}$ to alignment output $\mathbf {I}$ and also use it as a question filter:
$$\mathbf {Q}^{TLa} = Relu(\mathbf {W} \cdot [\mathbf {Q}^{TL},\overline{\mathbf {Q}^{TL}}^c,\mathbf {\delta _{\mathbf {Q}^{TL}}}])$$ (Eq. 49)
$$\mathbf {b}^{TLa}=norm(\mathbf {Q}^{TLa} \cdot \mathbf {D}^{c\mathrm {T}}) \in \mathbb {R} ^{M}$$ (Eq. 50)
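The question filter in Eq. 50 reduces to a normalized dot product between the adaptation Q-code and each document word representation; a short sketch follows, where norm is assumed to be a softmax over the $M$ scores since the excerpt does not define it.

import torch

def question_filter(q_code, doc_states):
    # q_code:     (d_c,)   adaptation Q-code, e.g. Q^{TLa}
    # doc_states: (M, d_c) document word representations D^c
    scores = doc_states @ q_code          # relevance of each document word, (M,)
    return torch.softmax(scores, dim=0)   # question-filter weights b, (M,)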
Updating The updating stage modifies the center vectors of the $K$ clusters so that each cluster fits a different type of question. The update is performed according to the following formula:
$$\mathbf {\bar{x}^{\prime }}_k = (1-\beta \text{w}_k^a)\mathbf {\bar{x}}_k+\beta \text{w}_k^a\mathbf {x}, \forall k \in [1, \dots , K]$$ (Eq. 54)
In the equation, $\beta $ is an updating rate that controls the magnitude of each update; we set it to 0.01. When $\mathbf {x}$ is far away from the $k$-th cluster center $\mathbf {\bar{x}}_k$, $\text{w}_k^a$ is close to 0 and the $k$-th cluster center $\mathbf {\bar{x}}_k$ tends not to be updated. If $\mathbf {x}$ is instead close to the $j$-th cluster center $\mathbf {\bar{x}}_j$, $\text{w}_j^a$ is close to 1 and the centroid of the $j$-th cluster $\mathbf {\bar{x}}_j$ is updated more aggressively towards $\mathbf {x}$.
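The updating step can be sketched in the same style (again an illustration, not the original code):

import torch

def update_centroids(x, centroids, w, beta=0.01):
    # x:         (h,)   input question vector
    # centroids: (K, h) current cluster centres
    # w:         (K,)   similarity weights w^a from the adapting step
    # Each centre moves towards x in proportion to its weight: centres far from
    # x (w_k close to 0) stay put, the closest centre (w_k close to 1) moves most.
    scale = (beta * w).unsqueeze(1)                              # (K, 1)
    return (1.0 - scale) * centroids + scale * x.unsqueeze(0)    # updated centres, (K, h)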
Set-Up
We test our models on the Stanford Question Answering Dataset (SQuAD) BIBREF3. The SQuAD dataset consists of more than 100,000 questions annotated by crowdsourcing workers on a selected set of Wikipedia articles, and the answer to each question is a span of text in the Wikipedia articles. The training data includes 87,599 instances and the validation set has 10,570 instances. The test data is hidden and kept by the organizer. SQuAD is evaluated with the Exact Match (EM) and F1 scores.
We use pre-trained 300-D GloVe 840B vectors BIBREF20 to initialize our word embeddings. Out-of-vocabulary (OOV) words are initialized randomly with Gaussian samples. The CharCNN filter lengths are 1, 3 and 5, each with 50 dimensions. All vectors, including word embeddings, are updated during training. The cluster number $K$ in the discriminative block is 100. The Adam method BIBREF25 is used for optimization, with the first momentum set to 0.9 and the second to 0.999. The initial learning rate is 0.0004 and the batch size is 32. We halve the learning rate when a bad iteration is encountered, with a patience of 7. Early stopping is based on the EM and F1 scores of the validation set. All hidden states of the GRUs and TreeLSTMs are 500 dimensions, while the word-level embedding $d_w$ is 300 dimensions. We set the maximum document length to 500 and drop question-document pairs beyond this limit from the training set. The explicit question-type dimension $d_{ET}$ is 50. We apply dropout to the encoder layer and the aggregation layer with a dropout rate of 0.5.
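For reference, the hyper-parameters above can be collected in a single configuration dictionary (a plain restatement of the listed values, not the authors' configuration file):

config = {
    "word_embedding": "GloVe 840B, 300-D, updated during training",
    "charcnn_filters": {"lengths": [1, 3, 5], "dim": 50},
    "cluster_count_K": 100,
    "optimizer": "Adam",
    "betas": (0.9, 0.999),
    "learning_rate": 4e-4,
    "batch_size": 32,
    "lr_halving_patience": 7,
    "hidden_size": 500,            # GRU / TreeLSTM states
    "question_type_dim": 50,       # explicit question-type dimension d_ET
    "max_document_length": 500,
    "dropout": 0.5,                # encoder and aggregation layers
}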
Results
Table 1 shows the official leaderboard on the SQuAD test set at the time we submitted our system. Our model achieves a 68.73% EM score and a 77.39% F1 score, which ranked among the state-of-the-art single models (without model ensembling).
Table 2 shows the ablation performance of the various Q-codes on the development set. Note that since the test set is hidden from us, we can only perform such an analysis on the development set. Our baseline model using no Q-code achieved 68.00% EM and 77.36% F1. When we added the explicit question-type T-code to the baseline model, the performance improved slightly to 68.16% (EM) and 77.58% (F1). We then used TreeLSTM to introduce syntactic parses for question representation and understanding (replacing the simple question type as the question-understanding Q-code), which consistently shows further improvement. We further incorporated the soft adaptation. When setting the number of hidden question types ($K$) to 20, the performance improves to 68.73%/77.74% on EM and F1, respectively, which corresponds to the results of our model reported in Table 1. Furthermore, after submitting our result, we experimented with larger values of $K$ and found that with $K=100$ we can achieve a better performance of 69.10%/78.38% on the development set.
Figure UID61 shows the EM/F1 scores of different question types, while Figure UID62 shows the distribution of question types on the development set. In Figure UID61 we can see that the average EM/F1 scores of the "when" questions are the highest and those of the "why" questions are the lowest. From Figure UID62 we can see that the "what" question is the major class.
Figure 5 shows the composition of the F1 score. Taking our best model as an example, we observed a 78.38% F1 score on the whole development set, which can be separated into two parts: one where the F1 score equals 100%, i.e., an exact match, accounting for 69.10% of the entire development set; and the other accounting for 30.90%, with an average F1 score of 30.03%. The latter can be further divided into two sub-parts: one where the F1 score equals 0%, meaning the predicted answer is completely wrong, occupying 14.89% of the total development set; and the other accounting for 16.01% of the development set, with an average F1 score of 57.96%. From this analysis we can see that reducing the zero-F1 portion (14.89%) is potentially an important direction for further improving the system.
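The reported percentages are mutually consistent, as a quick arithmetic check on the rounded figures shows:

exact   = 0.6910 * 1.0000   # exact matches (F1 = 100%)
partial = 0.3090 * 0.3003   # non-exact matches, average F1 = 30.03%
assert abs(exact + partial - 0.7838) < 1e-3   # overall F1 of 78.38%
zero    = 0.1489 * 0.0000   # completely wrong predictions (F1 = 0%)
rest    = 0.1601 * 0.5796   # partially correct predictions, average F1 = 57.96%
assert abs(zero + rest - partial) < 1e-3      # the two sub-parts add up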
Conclusions
Closely modelling questions could be of importance for question answering and machine reading. In this paper, we introduce syntactic information to help encode questions in neural networks. We view modelling different types of questions and the information shared among them as an adaptation task and propose adaptation models for it. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches can help attain better results over a competitive baseline. | Our model achieves a 68.73% EM score and 77.39% F1 score
ff28d34d1aaa57e7ad553dba09fc924dc21dd728 | ff28d34d1aaa57e7ad553dba09fc924dc21dd728_0 | Q: What are their correlation results?
Text: Introduction
Quality Estimation (QE) is a term used in machine translation (MT) to refer to methods that measure the quality of automatically translated text without relying on human references BIBREF0, BIBREF1. In this study, we address QE for summarization. Our proposed model, Sum-QE, successfully predicts linguistic qualities of summaries that traditional evaluation metrics fail to capture BIBREF2, BIBREF3, BIBREF4, BIBREF5. Sum-QE predictions can be used for system development, to inform users of the quality of automatically produced summaries and other types of generated text, and to select the best among summaries output by multiple systems.
Sum-QE relies on the BERT language representation model BIBREF6. We use a pre-trained BERT model adding just a task-specific layer, and fine-tune the entire model on the task of predicting linguistic quality scores manually assigned to summaries. The five criteria addressed are given in Figure FIGREF2. We provide a thorough evaluation on three publicly available summarization datasets from NIST shared tasks, and compare the performance of our model to a wide variety of baseline methods capturing different aspects of linguistic quality. Sum-QE achieves very high correlations with human ratings, showing the ability of BERT to model linguistic qualities that relate to both text content and form.
Related Work
Summarization evaluation metrics like Pyramid BIBREF5 and ROUGE BIBREF3, BIBREF2 are recall-oriented; they basically measure the content from a model (reference) summary that is preserved in peer (system generated) summaries. Pyramid requires substantial human effort, even in its more recent versions that involve the use of word embeddings BIBREF8 and a lightweight crowdsourcing scheme BIBREF9. ROUGE is the most commonly used evaluation metric BIBREF10, BIBREF11, BIBREF12. Inspired by BLEU BIBREF4, it relies on common $n$-grams or subsequences between peer and model summaries. Many ROUGE versions are available, but it remains hard to decide which one to use BIBREF13. Being recall-based, ROUGE correlates well with Pyramid but poorly with linguistic qualities of summaries. BIBREF14 proposed a regression model for measuring summary quality without references. The scores of their model correlate well with Pyramid and Responsiveness, but text quality is only addressed indirectly.
Quality Estimation is well established in MT BIBREF15, BIBREF0, BIBREF1, BIBREF16, BIBREF17. QE methods provide a quality indicator for translation output at run-time without relying on human references, typically needed by MT evaluation metrics BIBREF4, BIBREF18. QE models for MT make use of large post-edited datasets, and apply machine learning methods to predict post-editing effort scores and quality (good/bad) labels.
We apply QE to summarization, focusing on linguistic qualities that reflect the readability and fluency of the generated texts. Since no post-edited datasets – like the ones used in MT – are available for summarization, we use instead the ratings assigned by human annotators with respect to a set of linguistic quality criteria. Our proposed models achieve high correlation with human judgments, showing that it is possible to estimate summary quality without human references.
Datasets
We use datasets from the NIST DUC-05, DUC-06 and DUC-07 shared tasks BIBREF7, BIBREF19, BIBREF20. Given a question and a cluster of newswire documents, the contestants were asked to generate a 250-word summary answering the question. DUC-05 contains 1,600 summaries (50 questions x 32 systems); in DUC-06, 1,750 summaries are included (50 questions x 35 systems); and DUC-07 has 1,440 summaries (45 questions x 32 systems).
The submitted summaries were manually evaluated in terms of content preservation using the Pyramid score, and according to five linguistic quality criteria ($\mathcal {Q}1, \dots , \mathcal {Q}5$), described in Figure FIGREF2, that do not involve comparison with a model summary. Annotators assigned scores on a five-point scale, with 1 and 5 indicating that the summary is bad or good with respect to a specific $\mathcal {Q}$. The overall score for a contestant with respect to a specific $\mathcal {Q}$ is the average of the manual scores assigned to the summaries generated by the contestant. Note that the DUC-04 shared task involved seven $\mathcal {Q}$s, but some of them were found to be highly overlapping and were grouped into five in subsequent years BIBREF20. We address these five criteria and use DUC data from 2005 onwards in our experiments.
Methods ::: The Sum-QE Model
In Sum-QE, each peer summary is converted into a sequence of token embeddings, consumed by an encoder $\mathcal {E}$ to produce a (dense vector) summary representation $h$. Then, a regressor $\mathcal {R}$ predicts a quality score $S_{\mathcal {Q}}$ as an affine transformation of $h$:
Non-linear regression could also be used, but a linear (affine) $\mathcal {R}$ already performs well. We use BERT as our main encoder and fine-tune it in three ways, which leads to three versions of Sum-QE.
Methods ::: The Sum-QE Model ::: Single-task (BERT-FT-S-1):
The first version of Sum-QE uses five separate estimators, one per quality score, each having its own encoder $\mathcal {E}_i$ (a separate BERT instance generating $h_i$) and regressor $\mathcal {R}_i$ (a separate linear regression layer on top of the corresponding BERT instance):
Methods ::: The Sum-QE Model ::: Multi-task with one regressor (BERT-FT-M-1):
The second version of Sum-QE uses one estimator to predict all five quality scores at once, from a single encoding $h$ of the summary, produced by a single BERT instance. The intuition is that $\mathcal {E}$ will learn to create richer representations so that $\mathcal {R}$ (an affine transformation of $h$ with 5 outputs) will be able to predict all quality scores:
where $\mathcal {R}(h)[i]$ is the $i$-th element of the vector returned by $\mathcal {R}$.
Methods ::: The Sum-QE Model ::: Multi-task with 5 regressors (BERT-FT-M-5):
The third version of Sum-QE is similar to BERT-FT-M-1, but we now use five different linear (affine) regressors, one per quality score:
Although BERT-FT-M-5 is mathematically equivalent to BERT-FT-M-1, in practice these two versions of Sum-QE produce different results because of implementation details related to how the losses of the regressors (five or one) are combined.
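To make the three fine-tuning set-ups concrete, a minimal PyTorch-style sketch of the regression heads is shown below; it is not the authors' code, and the pooling of the summary representation $h$ and all names are assumptions. The single-task BERT-FT-S-1 corresponds to training five such models with n_qualities=1.

import torch
import torch.nn as nn

class SumQE(nn.Module):
    # `encoder` is assumed to be a pre-trained BERT instance whose output
    # exposes a pooled summary vector at index 1.
    def __init__(self, encoder, hidden_size, n_qualities=5, shared_head=False):
        super().__init__()
        self.encoder = encoder
        self.shared_head = shared_head
        if shared_head:
            # BERT-FT-M-1: a single affine regressor with five outputs.
            self.heads = nn.Linear(hidden_size, n_qualities)
        else:
            # BERT-FT-M-5: five separate affine regressors, one per quality.
            self.heads = nn.ModuleList(
                nn.Linear(hidden_size, 1) for _ in range(n_qualities))

    def forward(self, token_ids, attention_mask):
        h = self.encoder(token_ids, attention_mask=attention_mask)[1]
        if self.shared_head:
            return self.heads(h)                                    # (batch, 5)
        return torch.cat([head(h) for head in self.heads], dim=-1)  # (batch, 5)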
Methods ::: Baselines ::: BiGRUs with attention:
This is very similar to Sum-QE, but now $\mathcal {E}$ is a stack of BiGRUs with self-attention BIBREF21, instead of a BERT instance. The final summary representation ($h$) is the sum of the resulting context-aware token embeddings ($h = \sum _i a_i h_i$) weighted by their self-attention scores ($a_i$). We again have three flavors: one single-task (BiGRU-ATT-S-1) and two multi-task (BiGRU-ATT-M-1 and BiGRU-ATT-M-5).
Methods ::: Baselines ::: ROUGE:
This baseline is the ROUGE version that performs best on each dataset, among the versions considered by BIBREF13. Although ROUGE focuses on surface similarities between peer and reference summaries, we would expect properties like grammaticality, referential clarity and coherence to be captured to some extent by ROUGE versions based on long $n$-grams or longest common subsequences.
Methods ::: Baselines ::: Language model (LM):
For a peer summary, a reasonable estimate of $\mathcal {Q}1$ (Grammaticality) is the perplexity returned by a pre-trained language model. We experiment with the pre-trained GPT-2 model BIBREF22, and with the probability estimates that BERT can produce for each token when the token is treated as masked (BERT-FR-LM). Given that the grammaticality of a summary can be corrupted by just a few bad tokens, we compute the perplexity by considering only the $k$ worst (lowest LM probability) tokens of the peer summary, where $k$ is a tuned hyper-parameter.
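A small sketch of the $k$-worst-token perplexity follows, assuming per-token log-probabilities have already been obtained from the pre-trained LM; the helper name and exact aggregation are assumptions.

import math

def k_worst_perplexity(token_log_probs, k):
    # token_log_probs: log-probability of each peer-summary token under the LM
    # (e.g. GPT-2, or BERT's masked-token estimates).
    worst = sorted(token_log_probs)[:k]          # the k lowest log-probabilities
    avg_nll = -sum(worst) / max(len(worst), 1)   # average negative log-likelihood
    return math.exp(avg_nll)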
Methods ::: Baselines ::: Next sentence prediction:
BERT training relies on two tasks: predicting masked tokens and next sentence prediction. The latter seems to be aligned with the definitions of $\mathcal {Q}3$ (Referential Clarity), $\mathcal {Q}4$ (Focus) and $\mathcal {Q}5$ (Structure & Coherence). Intuitively, when a sentence follows another with high probability, it should involve clear referential expressions and preserve the focus and local coherence of the text. We, therefore, use a pre-trained BERT model (BERT-FR-NS) to calculate the sentence-level perplexity of each summary:
where $p(s_i|s_{i-1})$ is the probability that BERT assigns to the sequence of sentences $\left< s_{i-1}, s \right>$, and $n$ is the number of sentences in the peer summary.
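Since the display equation is omitted in this excerpt, the sketch below shows one plausible reading: a geometric-mean perplexity over consecutive sentence pairs, with next_sentence_prob supplied by a pre-trained next-sentence-prediction head (both the normalisation and the helper are assumptions).

import math

def sentence_level_perplexity(sentences, next_sentence_prob):
    # next_sentence_prob(prev, cur) -> p(cur | prev), e.g. the probability the
    # NSP head assigns to the sentence pair <prev, cur>.
    pairs = list(zip(sentences[:-1], sentences[1:]))
    if not pairs:
        return 1.0
    log_p = sum(math.log(next_sentence_prob(prev, cur)) for prev, cur in pairs)
    return math.exp(-log_p / len(pairs))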
Experiments
To evaluate our methods for a particular $\mathcal {Q}$, we calculate the average of the predicted scores for the summaries of each particular contestant, and the average of the corresponding manual scores assigned to the contestant's summaries. We measure the correlation between the two (predicted vs. manual) across all contestants using Spearman's $\rho $, Kendall's $\tau $ and Pearson's $r$.
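The system-level correlations can be reproduced with standard statistics utilities; a brief SciPy sketch (one possible choice of library) follows.

from scipy.stats import kendalltau, pearsonr, spearmanr

def system_level_correlations(predicted, manual):
    # predicted, manual: per-contestant average scores, in the same order.
    rho, _ = spearmanr(predicted, manual)
    tau, _ = kendalltau(predicted, manual)
    r, _ = pearsonr(predicted, manual)
    return rho, tau, r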
We train and test the Sum-QE and BiGRU-ATT versions using a 3-fold procedure. In each fold, we train on two datasets (e.g., DUC-05, DUC-06) and test on the third (e.g., DUC-07). We follow the same procedure with the three BiGRU-based models. Hyper-parameters are tuned on a held-out subset from the training set of each fold.
Results
Table TABREF23 shows Spearman's $\rho $, Kendall's $\tau $ and Pearson's $r$ for all datasets and models. The three fine-tuned BERT versions clearly outperform all other methods. Multi-task versions seem to perform better than single-task ones in most cases. Especially for $\mathcal {Q}4$ and $\mathcal {Q}5$, which are highly correlated, the multi-task BERT versions achieve the best overall results. BiGRU-ATT also benefits from multi-task learning.
The correlation of Sum-QE with human judgments is high or very high BIBREF23 for all $\mathcal {Q}$s in all datasets, apart from $\mathcal {Q}2$ in DUC-05 where it is only moderate. Manual scores for $\mathcal {Q}2$ in DUC-05 are the highest among all $\mathcal {Q}$s and years (between 4 and 5) and with the smallest standard deviation, as shown in Table TABREF24. Differences among systems are thus small in this respect, and although Sum-QE predicts scores in this range, it struggles to put them in the correct order, as illustrated in Figure FIGREF26.
BEST-ROUGE has a negative correlation with the ground-truth scores for $\mathcal {Q}$2 since it does not account for repetitions. The BiGRU-based models also reach their lowest performance on $\mathcal {Q}$2 in DUC-05. A possible reason for the higher relative performance of the BERT-based models, which achieve a moderate positive correlation, is that BiGRU captures long-distance relations less effectively than BERT, which utilizes Transformers BIBREF24 and has a larger receptive field. A possible improvement would be a stacked BiGRU, since the states of higher stack layers have a larger receptive field as well.
The BERT multi-task versions perform better with highly correlated qualities like $\mathcal {Q}4$ and $\mathcal {Q}5$ (as illustrated in Figures 2 to 4 in the supplementary material). However, there is not a clear winner among them. Mathematical equivalence does not lead to deterministic results, especially when random initialization and stochastic learning algorithms are involved. An in-depth exploration of this point would involve further investigation, which will be part of future work.
Conclusion and Future Work
We propose a novel Quality Estimation model for summarization which does not require human references to estimate the quality of automatically produced summaries. Sum-QE successfully predicts qualitative aspects of summaries that recall-oriented evaluation metrics fail to approximate. Leveraging powerful BERT representations, it achieves high correlations with human scores for most linguistic qualities rated, on three different datasets. Future work involves extending the Sum-QE model to capture content-related aspects, either in combination with existing evaluation metrics (like Pyramid and ROUGE) or, preferably, by identifying important information in the original text and modelling its preservation in the proposed summaries. This would preserve Sum-QE's independence from human references, a property of central importance in real-life usage scenarios and system development settings.
The datasets used in our experiments come from the NIST DUC shared tasks, which comprise newswire articles. We believe that Sum-QE could be easily applied to other domains. A small amount of annotated data would be needed for fine-tuning – especially in domains with specialized vocabulary (e.g., biomedical) – but the model could also be used out of the box. A concrete estimation of performance in this setting will be part of future work. Also, the model could serve to estimate linguistic qualities other than the ones in the DUC dataset with minimum effort.
Finally, Sum-QE could serve to assess the quality of other types of texts, not only summaries. It could thus be applied to other text generation tasks, such as natural language generation and sentence compression.
Acknowledgments
We would like to thank the anonymous reviewers for their helpful feedback on this work. The work has been partly supported by the Research Center of the Athens University of Economics and Business, and by the French National Research Agency under project ANR-16-CE33-0013. | High correlation results range from 0.472 to 0.936 |
ae8354e67978b7c333094c36bf9d561ca0c2d286 | ae8354e67978b7c333094c36bf9d561ca0c2d286_0 | Q: What dataset do they use?
Text: (same Sum-QE paper as in the previous record) | datasets from the NIST DUC-05, DUC-06 and DUC-07 shared tasks
02348ab62957cb82067c589769c14d798b1ceec7 | 02348ab62957cb82067c589769c14d798b1ceec7_0 | Q: What simpler models do they look at?
Text: (same Sum-QE paper as above) | BiGRUs with attention, ROUGE, Language model (LM), Next sentence prediction
02348ab62957cb82067c589769c14d798b1ceec7 | 02348ab62957cb82067c589769c14d798b1ceec7_1 | Q: What simpler models do they look at?
Text: Introduction
Quality Estimation (QE) is a term used in machine translation (MT) to refer to methods that measure the quality of automatically translated text without relying on human references BIBREF0, BIBREF1. In this study, we address QE for summarization. Our proposed model, Sum-QE, successfully predicts linguistic qualities of summaries that traditional evaluation metrics fail to capture BIBREF2, BIBREF3, BIBREF4, BIBREF5. Sum-QE predictions can be used for system development, to inform users of the quality of automatically produced summaries and other types of generated text, and to select the best among summaries output by multiple systems.
Sum-QE relies on the BERT language representation model BIBREF6. We use a pre-trained BERT model adding just a task-specific layer, and fine-tune the entire model on the task of predicting linguistic quality scores manually assigned to summaries. The five criteria addressed are given in Figure FIGREF2. We provide a thorough evaluation on three publicly available summarization datasets from NIST shared tasks, and compare the performance of our model to a wide variety of baseline methods capturing different aspects of linguistic quality. Sum-QE achieves very high correlations with human ratings, showing the ability of BERT to model linguistic qualities that relate to both text content and form.
Related Work
Summarization evaluation metrics like Pyramid BIBREF5 and ROUGE BIBREF3, BIBREF2 are recall-oriented; they basically measure the content from a model (reference) summary that is preserved in peer (system generated) summaries. Pyramid requires substantial human effort, even in its more recent versions that involve the use of word embeddings BIBREF8 and a lightweight crowdsourcing scheme BIBREF9. ROUGE is the most commonly used evaluation metric BIBREF10, BIBREF11, BIBREF12. Inspired by BLEU BIBREF4, it relies on common $n$-grams or subsequences between peer and model summaries. Many ROUGE versions are available, but it remains hard to decide which one to use BIBREF13. Being recall-based, ROUGE correlates well with Pyramid but poorly with linguistic qualities of summaries. BIBREF14 proposed a regression model for measuring summary quality without references. The scores of their model correlate well with Pyramid and Responsiveness, but text quality is only addressed indirectly.
Quality Estimation is well established in MT BIBREF15, BIBREF0, BIBREF1, BIBREF16, BIBREF17. QE methods provide a quality indicator for translation output at run-time without relying on human references, typically needed by MT evaluation metrics BIBREF4, BIBREF18. QE models for MT make use of large post-edited datasets, and apply machine learning methods to predict post-editing effort scores and quality (good/bad) labels.
We apply QE to summarization, focusing on linguistic qualities that reflect the readability and fluency of the generated texts. Since no post-edited datasets – like the ones used in MT – are available for summarization, we use instead the ratings assigned by human annotators with respect to a set of linguistic quality criteria. Our proposed models achieve high correlation with human judgments, showing that it is possible to estimate summary quality without human references.
Datasets
We use datasets from the NIST DUC-05, DUC-06 and DUC-07 shared tasks BIBREF7, BIBREF19, BIBREF20. Given a question and a cluster of newswire documents, the contestants were asked to generate a 250-word summary answering the question. DUC-05 contains 1,600 summaries (50 questions x 32 systems); in DUC-06, 1,750 summaries are included (50 questions x 35 systems); and DUC-07 has 1,440 summaries (45 questions x 32 systems).
The submitted summaries were manually evaluated in terms of content preservation using the Pyramid score, and according to five linguistic quality criteria ($\mathcal {Q}1, \dots , \mathcal {Q}5$), described in Figure FIGREF2, that do not involve comparison with a model summary. Annotators assigned scores on a five-point scale, with 1 and 5 indicating that the summary is bad or good with respect to a specific $\mathcal {Q}$. The overall score for a contestant with respect to a specific $\mathcal {Q}$ is the average of the manual scores assigned to the summaries generated by the contestant. Note that the DUC-04 shared task involved seven $\mathcal {Q}$s, but some of them were found to be highly overlapping and were grouped into five in subsequent years BIBREF20. We address these five criteria and use DUC data from 2005 onwards in our experiments.
Methods ::: The Sum-QE Model
In Sum-QE, each peer summary is converted into a sequence of token embeddings, consumed by an encoder $\mathcal {E}$ to produce a (dense vector) summary representation $h$. Then, a regressor $\mathcal {R}$ predicts a quality score $S_{\mathcal {Q}}$ as an affine transformation of $h$:
Non-linear regression could also be used, but a linear (affine) $\mathcal {R}$ already performs well. We use BERT as our main encoder and fine-tune it in three ways, which leads to three versions of Sum-QE.
Methods ::: The Sum-QE Model ::: Single-task (BERT-FT-S-1):
The first version of Sum-QE uses five separate estimators, one per quality score, each having its own encoder $\mathcal {E}_i$ (a separate BERT instance generating $h_i$) and regressor $\mathcal {R}_i$ (a separate linear regression layer on top of the corresponding BERT instance):
Methods ::: The Sum-QE Model ::: Multi-task with one regressor (BERT-FT-M-1):
The second version of Sum-QE uses one estimator to predict all five quality scores at once, from a single encoding $h$ of the summary, produced by a single BERT instance. The intuition is that $\mathcal {E}$ will learn to create richer representations so that $\mathcal {R}$ (an affine transformation of $h$ with 5 outputs) will be able to predict all quality scores:
where $\mathcal {R}(h)[i]$ is the $i$-th element of the vector returned by $\mathcal {R}$.
Methods ::: The Sum-QE Model ::: Multi-task with 5 regressors (BERT-FT-M-5):
The third version of Sum-QE is similar to BERT-FT-M-1, but we now use five different linear (affine) regressors, one per quality score:
Although BERT-FT-M-5 is mathematically equivalent to BERT-FT-M-1, in practice these two versions of Sum-QE produce different results because of implementation details related to how the losses of the regressors (five or one) are combined.
Methods ::: Baselines ::: BiGRU s with attention:
This is very similar to Sum-QE but now $\mathcal {E}$ is a stack of BiGRU s with self-attention BIBREF21, instead of a BERT instance. The final summary representation ($h$) is the sum of the resulting context-aware token embeddings ($h = \sum _i a_i h_i$) weighted by their self-attention scores ($a_i$). We again have three flavors: one single-task (BiGRU-ATT-S-1) and two multi-task (BiGRU-ATT-M-1 and BiGRU-ATT-M-5).
Methods ::: Baselines ::: ROUGE:
This baseline is the ROUGE version that performs best on each dataset, among the versions considered by BIBREF13. Although ROUGE focuses on surface similarities between peer and reference summaries, we would expect properties like grammaticality, referential clarity and coherence to be captured to some extent by ROUGE versions based on long $n$-grams or longest common subsequences.
Methods ::: Baselines ::: Language model (LM):
For a peer summary, a reasonable estimate of $\mathcal {Q}1$ (Grammaticality) is the perplexity returned by a pre-trained language model. We experiment with the pre-trained GPT-2 model BIBREF22, and with the probability estimates that BERT can produce for each token when the token is treated as masked (BERT-FR-LM). Given that the grammaticality of a summary can be corrupted by just a few bad tokens, we compute the perplexity by considering only the $k$ worst (lowest LM probability) tokens of the peer summary, where $k$ is a tuned hyper-parameter.
Methods ::: Baselines ::: Next sentence prediction:
BERT training relies on two tasks: predicting masked tokens and next sentence prediction. The latter seems to be aligned with the definitions of $\mathcal {Q}3$ (Referential Clarity), $\mathcal {Q}4$ (Focus) and $\mathcal {Q}5$ (Structure & Coherence). Intuitively, when a sentence follows another with high probability, it should involve clear referential expressions and preserve the focus and local coherence of the text. We, therefore, use a pre-trained BERT model (BERT-FR-NS) to calculate the sentence-level perplexity of each summary:
where $p(s_i|s_{i-1})$ is the probability that BERT assigns to the sequence of sentences $\left< s_{i-1}, s \right>$, and $n$ is the number of sentences in the peer summary.
Experiments
To evaluate our methods for a particular $\mathcal {Q}$, we calculate the average of the predicted scores for the summaries of each particular contestant, and the average of the corresponding manual scores assigned to the contestant's summaries. We measure the correlation between the two (predicted vs. manual) across all contestants using Spearman's $\rho $, Kendall's $\tau $ and Pearson's $r$.
We train and test the Sum-QE and BiGRU-ATT versions using a 3-fold procedure. In each fold, we train on two datasets (e.g., DUC-05, DUC-06) and test on the third (e.g., DUC-07). We follow the same procedure with the three BiGRU-based models. Hyper-perameters are tuned on a held out subset from the training set of each fold.
Results
Table TABREF23 shows Spearman's $\rho $, Kendall's $\tau $ and Pearson's $r$ for all datasets and models. The three fine-tuned BERT versions clearly outperform all other methods. Multi-task versions seem to perform better than single-task ones in most cases. Especially for $\mathcal {Q}4$ and $\mathcal {Q}5$, which are highly correlated, the multi-task BERT versions achieve the best overall results. BiGRU-ATT also benefits from multi-task learning.
The correlation of Sum-QE with human judgments is high or very high BIBREF23 for all $\mathcal {Q}$s in all datasets, apart from $\mathcal {Q}2$ in DUC-05 where it is only moderate. Manual scores for $\mathcal {Q}2$ in DUC-05 are the highest among all $\mathcal {Q}$s and years (between 4 and 5) and with the smallest standard deviation, as shown in Table TABREF24. Differences among systems are thus small in this respect, and although Sum-QE predicts scores in this range, it struggles to put them in the correct order, as illustrated in Figure FIGREF26.
BEST-ROUGE has a negative correlation with the ground-truth scores for $\mathcal {Q}$2 since it does not account for repetitions. The BiGRU-based models also reach their lowest performance on $\mathcal {Q}$2 in DUC-05. A possible reason for the higher relative performance of the BERT-based models, which achieve a moderate positive correlation, is that BiGRU captures long-distance relations less effectively than BERT, which utilizes Transformers BIBREF24 and has a larger receptive field. A possible improvement would be a stacked BiGRU, since the states of higher stack layers have a larger receptive field as well.
The BERT multi-task versions perform better with highly correlated qualities like $\mathcal {Q}4$ and $\mathcal {Q}5$ (as illustrated in Figures 2 to 4 in the supplementary material). However, there is not a clear winner among them. Mathematical equivalence does not lead to deterministic results, especially when random initialization and stochastic learning algorithms are involved. An in-depth exploration of this point is left for future work.
Conclusion and Future Work
We propose a novel Quality Estimation model for summarization which does not require human references to estimate the quality of automatically produced summaries. Sum-QE successfully predicts qualitative aspects of summaries that recall-oriented evaluation metrics fail to approximate. Leveraging powerful BERT representations, it achieves high correlations with human scores for most linguistic qualities rated, on three different datasets. Future work involves extending the Sum-QE model to capture content-related aspects, either in combination with existing evaluation metrics (like Pyramid and ROUGE) or, preferably, by identifying important information in the original text and modelling its preservation in the proposed summaries. This would preserve Sum-QE's independence from human references, a property of central importance in real-life usage scenarios and system development settings.
The datasets used in our experiments come from the NIST DUC shared tasks which comprise newswire articles. We believe that Sum-QE could be easily applied to other domains. A small amount of annotated data would be needed for fine-tuning – especially in domains with specialized vocabulary (e.g., biomedical) – but the model could also be used out of the box. A concrete estimation of performance in this setting will be part of future work. Also, the model could serve to estimate linguistic qualities other than the ones in the DUC dataset with minimum effort.
Finally, Sum-QE could serve to assess the quality of other types of texts, not only summaries. It could thus be applied to other text generation tasks, such as natural language generation and sentence compression.
Acknowledgments
We would like to thank the anonymous reviewers for their helpful feedback on this work. The work has been partly supported by the Research Center of the Athens University of Economics and Business, and by the French National Research Agency under project ANR-16-CE33-0013. | BiGRUs with attention, ROUGE, Language model, and next sentence prediction |
3748787379b3a7d222c3a6254def3f5bfb93a60e | 3748787379b3a7d222c3a6254def3f5bfb93a60e_0 | Q: What linguistic quality aspects are addressed?
Text: Introduction
Quality Estimation (QE) is a term used in machine translation (MT) to refer to methods that measure the quality of automatically translated text without relying on human references BIBREF0, BIBREF1. In this study, we address QE for summarization. Our proposed model, Sum-QE, successfully predicts linguistic qualities of summaries that traditional evaluation metrics fail to capture BIBREF2, BIBREF3, BIBREF4, BIBREF5. Sum-QE predictions can be used for system development, to inform users of the quality of automatically produced summaries and other types of generated text, and to select the best among summaries output by multiple systems.
Sum-QE relies on the BERT language representation model BIBREF6. We use a pre-trained BERT model adding just a task-specific layer, and fine-tune the entire model on the task of predicting linguistic quality scores manually assigned to summaries. The five criteria addressed are given in Figure FIGREF2. We provide a thorough evaluation on three publicly available summarization datasets from NIST shared tasks, and compare the performance of our model to a wide variety of baseline methods capturing different aspects of linguistic quality. Sum-QE achieves very high correlations with human ratings, showing the ability of BERT to model linguistic qualities that relate to both text content and form.
Related Work
Summarization evaluation metrics like Pyramid BIBREF5 and ROUGE BIBREF3, BIBREF2 are recall-oriented; they basically measure the content from a model (reference) summary that is preserved in peer (system generated) summaries. Pyramid requires substantial human effort, even in its more recent versions that involve the use of word embeddings BIBREF8 and a lightweight crowdsourcing scheme BIBREF9. ROUGE is the most commonly used evaluation metric BIBREF10, BIBREF11, BIBREF12. Inspired by BLEU BIBREF4, it relies on common $n$-grams or subsequences between peer and model summaries. Many ROUGE versions are available, but it remains hard to decide which one to use BIBREF13. Being recall-based, ROUGE correlates well with Pyramid but poorly with linguistic qualities of summaries. BIBREF14 proposed a regression model for measuring summary quality without references. The scores of their model correlate well with Pyramid and Responsiveness, but text quality is only addressed indirectly.
Quality Estimation is well established in MT BIBREF15, BIBREF0, BIBREF1, BIBREF16, BIBREF17. QE methods provide a quality indicator for translation output at run-time without relying on human references, typically needed by MT evaluation metrics BIBREF4, BIBREF18. QE models for MT make use of large post-edited datasets, and apply machine learning methods to predict post-editing effort scores and quality (good/bad) labels.
We apply QE to summarization, focusing on linguistic qualities that reflect the readability and fluency of the generated texts. Since no post-edited datasets – like the ones used in MT – are available for summarization, we use instead the ratings assigned by human annotators with respect to a set of linguistic quality criteria. Our proposed models achieve high correlation with human judgments, showing that it is possible to estimate summary quality without human references.
Datasets
We use datasets from the NIST DUC-05, DUC-06 and DUC-07 shared tasks BIBREF7, BIBREF19, BIBREF20. Given a question and a cluster of newswire documents, the contestants were asked to generate a 250-word summary answering the question. DUC-05 contains 1,600 summaries (50 questions x 32 systems); in DUC-06, 1,750 summaries are included (50 questions x 35 systems); and DUC-07 has 1,440 summaries (45 questions x 32 systems).
The submitted summaries were manually evaluated in terms of content preservation using the Pyramid score, and according to five linguistic quality criteria ($\mathcal {Q}1, \dots , \mathcal {Q}5$), described in Figure FIGREF2, that do not involve comparison with a model summary. Annotators assigned scores on a five-point scale, with 1 and 5 indicating that the summary is bad or good with respect to a specific $\mathcal {Q}$. The overall score for a contestant with respect to a specific $\mathcal {Q}$ is the average of the manual scores assigned to the summaries generated by the contestant. Note that the DUC-04 shared task involved seven $\mathcal {Q}$s, but some of them were found to be highly overlapping and were grouped into five in subsequent years BIBREF20. We address these five criteria and use DUC data from 2005 onwards in our experiments.
Methods ::: The Sum-QE Model
In Sum-QE, each peer summary is converted into a sequence of token embeddings, consumed by an encoder $\mathcal {E}$ to produce a (dense vector) summary representation $h$. Then, a regressor $\mathcal {R}$ predicts a quality score $S_{\mathcal {Q}}$ as an affine transformation of $h$:
Non-linear regression could also be used, but a linear (affine) $\mathcal {R}$ already performs well. We use BERT as our main encoder and fine-tune it in three ways, which leads to three versions of Sum-QE.
Methods ::: The Sum-QE Model ::: Single-task (BERT-FT-S-1):
The first version of Sum-QE uses five separate estimators, one per quality score, each having its own encoder $\mathcal {E}_i$ (a separate BERT instance generating $h_i$) and regressor $\mathcal {R}_i$ (a separate linear regression layer on top of the corresponding BERT instance):
Methods ::: The Sum-QE Model ::: Multi-task with one regressor (BERT-FT-M-1):
The second version of Sum-QE uses one estimator to predict all five quality scores at once, from a single encoding $h$ of the summary, produced by a single BERT instance. The intuition is that $\mathcal {E}$ will learn to create richer representations so that $\mathcal {R}$ (an affine transformation of $h$ with 5 outputs) will be able to predict all quality scores:
where $\mathcal {R}(h)[i]$ is the $i$-th element of the vector returned by $\mathcal {R}$.
Methods ::: The Sum-QE Model ::: Multi-task with 5 regressors (BERT-FT-M-5):
The third version of Sum-QE is similar to BERT-FT-M-1, but we now use five different linear (affine) regressors, one per quality score:
Although BERT-FT-M-5 is mathematically equivalent to BERT-FT-M-1, in practice these two versions of Sum-QE produce different results because of implementation details related to how the losses of the regressors (five or one) are combined.
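To make the multi-task variant with five regressors (BERT-FT-M-5) concrete, here is a minimal PyTorch-style sketch assuming the HuggingFace transformers API and [CLS] pooling; the pooling choice, hyper-parameters, and loss combination are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import BertModel

class SumQE(nn.Module):
    """BERT encoder E plus one linear regressor R_i per quality score (BERT-FT-M-5)."""
    def __init__(self, n_qualities=5, bert_name="bert-base-uncased"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(bert_name)
        hidden = self.encoder.config.hidden_size
        self.regressors = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(n_qualities))

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        h = out.last_hidden_state[:, 0]               # [CLS] vector as the summary representation
        return torch.cat([reg(h) for reg in self.regressors], dim=-1)   # (batch, n_qualities)

def multitask_loss(pred, gold):
    """Sum of the per-quality MSE losses; fine-tunes the whole model when backpropagated."""
    return sum(F.mse_loss(pred[:, i], gold[:, i]) for i in range(pred.size(1)))
```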
Methods ::: Baselines ::: BiGRUs with attention:
This is very similar to Sum-QE but now $\mathcal {E}$ is a stack of BiGRUs with self-attention BIBREF21, instead of a BERT instance. The final summary representation ($h$) is the sum of the resulting context-aware token embeddings ($h = \sum _i a_i h_i$) weighted by their self-attention scores ($a_i$). We again have three flavors: one single-task (BiGRU-ATT-S-1) and two multi-task (BiGRU-ATT-M-1 and BiGRU-ATT-M-5).
Methods ::: Baselines ::: ROUGE:
This baseline is the ROUGE version that performs best on each dataset, among the versions considered by BIBREF13. Although ROUGE focuses on surface similarities between peer and reference summaries, we would expect properties like grammaticality, referential clarity and coherence to be captured to some extent by ROUGE versions based on long $n$-grams or longest common subsequences.
Methods ::: Baselines ::: Language model (LM):
For a peer summary, a reasonable estimate of $\mathcal {Q}1$ (Grammaticality) is the perplexity returned by a pre-trained language model. We experiment with the pre-trained GPT-2 model BIBREF22, and with the probability estimates that BERT can produce for each token when the token is treated as masked (BERT-FR-LM). Given that the grammaticality of a summary can be corrupted by just a few bad tokens, we compute the perplexity by considering only the $k$ worst (lowest LM probability) tokens of the peer summary, where $k$ is a tuned hyper-parameter.
Methods ::: Baselines ::: Next sentence prediction:
BERT training relies on two tasks: predicting masked tokens and next sentence prediction. The latter seems to be aligned with the definitions of $\mathcal {Q}3$ (Referential Clarity), $\mathcal {Q}4$ (Focus) and $\mathcal {Q}5$ (Structure & Coherence). Intuitively, when a sentence follows another with high probability, it should involve clear referential expressions and preserve the focus and local coherence of the text. We, therefore, use a pre-trained BERT model (BERT-FR-NS) to calculate the sentence-level perplexity of each summary:
where $p(s_i|s_{i-1})$ is the probability that BERT assigns to the sequence of sentences $\left< s_{i-1}, s \right>$, and $n$ is the number of sentences in the peer summary.
Experiments
To evaluate our methods for a particular $\mathcal {Q}$, we calculate the average of the predicted scores for the summaries of each particular contestant, and the average of the corresponding manual scores assigned to the contestant's summaries. We measure the correlation between the two (predicted vs. manual) across all contestants using Spearman's $\rho $, Kendall's $\tau $ and Pearson's $r$.
We train and test the Sum-QE and BiGRU-ATT versions using a 3-fold procedure. In each fold, we train on two datasets (e.g., DUC-05, DUC-06) and test on the third (e.g., DUC-07). We follow the same procedure with the three BiGRU-based models. Hyper-parameters are tuned on a held-out subset from the training set of each fold.
Results
Table TABREF23 shows Spearman's $\rho $, Kendall's $\tau $ and Pearson's $r$ for all datasets and models. The three fine-tuned BERT versions clearly outperform all other methods. Multi-task versions seem to perform better than single-task ones in most cases. Especially for $\mathcal {Q}4$ and $\mathcal {Q}5$, which are highly correlated, the multi-task BERT versions achieve the best overall results. BiGRU-ATT also benefits from multi-task learning.
The correlation of Sum-QE with human judgments is high or very high BIBREF23 for all $\mathcal {Q}$s in all datasets, apart from $\mathcal {Q}2$ in DUC-05 where it is only moderate. Manual scores for $\mathcal {Q}2$ in DUC-05 are the highest among all $\mathcal {Q}$s and years (between 4 and 5) and with the smallest standard deviation, as shown in Table TABREF24. Differences among systems are thus small in this respect, and although Sum-QE predicts scores in this range, it struggles to put them in the correct order, as illustrated in Figure FIGREF26.
BEST-ROUGE has a negative correlation with the ground-truth scores for $\mathcal {Q}$2 since it does not account for repetitions. The BiGRU-based models also reach their lowest performance on $\mathcal {Q}$2 in DUC-05. A possible reason for the higher relative performance of the BERT-based models, which achieve a moderate positive correlation, is that BiGRU captures long-distance relations less effectively than BERT, which utilizes Transformers BIBREF24 and has a larger receptive field. A possible improvement would be a stacked BiGRU, since the states of higher stack layers have a larger receptive field as well.
The BERT multi-task versions perform better with highly correlated qualities like $\mathcal {Q}4$ and $\mathcal {Q}5$ (as illustrated in Figures 2 to 4 in the supplementary material). However, there is not a clear winner among them. Mathematical equivalence does not lead to deterministic results, especially when random initialization and stochastic learning algorithms are involved. An in-depth exploration of this point is left for future work.
Conclusion and Future Work
We propose a novel Quality Estimation model for summarization which does not require human references to estimate the quality of automatically produced summaries. Sum-QE successfully predicts qualitative aspects of summaries that recall-oriented evaluation metrics fail to approximate. Leveraging powerful BERT representations, it achieves high correlations with human scores for most linguistic qualities rated, on three different datasets. Future work involves extending the Sum-QE model to capture content-related aspects, either in combination with existing evaluation metrics (like Pyramid and ROUGE) or, preferably, by identifying important information in the original text and modelling its preservation in the proposed summaries. This would preserve Sum-QE's independence from human references, a property of central importance in real-life usage scenarios and system development settings.
The datasets used in our experiments come from the NIST DUC shared tasks which comprise newswire articles. We believe that Sum-QE could be easily applied to other domains. A small amount of annotated data would be needed for fine-tuning – especially in domains with specialized vocabulary (e.g., biomedical) – but the model could also be used out of the box. A concrete estimation of performance in this setting will be part of future work. Also, the model could serve to estimate linguistic qualities other than the ones in the DUC dataset with minimum effort.
Finally, Sum-QE could serve to assess the quality of other types of texts, not only summaries. It could thus be applied to other text generation tasks, such as natural language generation and sentence compression.
Acknowledgments
We would like to thank the anonymous reviewers for their helpful feedback on this work. The work has been partly supported by the Research Center of the Athens University of Economics and Business, and by the French National Research Agency under project ANR-16-CE33-0013. | Grammaticality, non-redundancy, referential clarity, focus, structure & coherence |
6852217163ea678f2009d4726cb6bd03cf6a8f78 | 6852217163ea678f2009d4726cb6bd03cf6a8f78_0 | Q: What benchmark datasets are used for the link prediction task?
Text: Introduction
Knowledge graphs are usually collections of factual triples—(head entity, relation, tail entity), which represent human knowledge in a structured way. In the past few years, we have witnessed the great achievement of knowledge graphs in many areas, such as natural language processing BIBREF0, question answering BIBREF1, and recommendation systems BIBREF2.
Although commonly used knowledge graphs contain billions of triples, they still suffer from the incompleteness problem that a lot of valid triples are missing, as it is impractical to find all valid triples manually. Therefore, knowledge graph completion, also known as link prediction in knowledge graphs, has attracted much attention recently. Link prediction aims to automatically predict missing links between entities based on known links. It is a challenging task as we not only need to predict whether there is a relation between two entities, but also need to determine which relation it is.
Inspired by word embeddings BIBREF3 that can well capture semantic meaning of words, researchers turn to distributed representations of knowledge graphs (aka, knowledge graph embeddings) to deal with the link prediction problem. Knowledge graph embeddings regard entities and relations as low dimensional vectors (or matrices, tensors), which can be stored and computed efficiently. Moreover, like in the case of word embeddings, knowledge graph embeddings can preserve the semantics and inherent structures of entities and relations. Therefore, other than the link prediction task, knowledge graph embeddings can also be used in various downstream tasks, such as triple classification BIBREF4, relation inference BIBREF5, and search personalization BIBREF6.
The success of existing knowledge graph embedding models heavily relies on their ability to model connectivity patterns of the relations, such as symmetry/antisymmetry, inversion, and composition BIBREF7. For example, TransE BIBREF8, which represent relations as translations, can model the inversion and composition patterns. DistMult BIBREF9, which models the three-way interactions between head entities, relations, and tail entities, can model the symmetry pattern. RotatE BIBREF7, which represents entities as points in a complex space and relations as rotations, can model relation patterns including symmetry/antisymmetry, inversion, and composition. However, many existing models fail to model semantic hierarchies in knowledge graphs.
Semantic hierarchy is a ubiquitous property in knowledge graphs. For instance, WordNet BIBREF10 contains the triple [arbor/cassia/palm, hypernym, tree], where “tree” is at a higher level than “arbor/cassia/palm” in the hierarchy. Freebase BIBREF11 contains the triple [England, /location/location/contains, Pontefract/Lancaster], where “Pontefract/Lancaster” is at a lower level than “England” in the hierarchy. Although there exists some work that takes the hierarchy structures into account BIBREF12, BIBREF13, they usually require additional data or process to obtain the hierarchy information. Therefore, it is still challenging to find an approach that is capable of modeling the semantic hierarchy automatically and effectively.
In this paper, we propose a novel knowledge graph embedding model—namely, Hierarchy-Aware Knowledge Graph Embedding (HAKE). To model the semantic hierarchies, HAKE is expected to distinguish entities in two categories: (a) at different levels of the hierarchy; (b) at the same level of the hierarchy. Inspired by the fact that entities that have the hierarchical properties can be viewed as a tree, we can use the depth of a node (entity) to model different levels of the hierarchy. Thus, we use modulus information to model entities in the category (a), as the size of moduli can reflect the depth. Under the above settings, entities in the category (b) will have roughly the same modulus, which is hard to distinguish. Inspired by the fact that the points on the same circle can have different phases, we use phase information to model entities in the category (b). Combining the modulus and phase information, HAKE maps entities into the polar coordinate system, where the radial coordinate corresponds to the modulus information and the angular coordinate corresponds to the phase information. Experiments show that our proposed HAKE model can not only clearly distinguish the semantic hierarchies of entities, but also significantly and consistently outperform several state-of-the-art methods on the benchmark datasets.
Notations Throughout this paper, we use lower-case letters $h$, $r$, and $t$ to represent head entities, relations, and tail entities, respectively. The triplet $(h,r,t)$ denotes a fact in knowledge graphs. The corresponding boldface lower-case letters $\textbf {h}$, $\textbf {r}$ and $\textbf {t}$ denote the embeddings (vectors) of head entities, relations, and tail entities. The $i$-th entry of a vector $\textbf {h}$ is denoted as $[\textbf {h}]_i$. Let $k$ denote the embedding dimension.
Let $\circ :\mathbb {R}^n\times \mathbb {R}^n\rightarrow \mathbb {R}^n$ denote the Hadamard product between two vectors, that is, $[\textbf {a}\circ \textbf {b}]_i=[\textbf {a}]_i\,[\textbf {b}]_i$,
and $\Vert \cdot \Vert _1$, $\Vert \cdot \Vert _2$ denote the $\ell _1$ and $\ell _2$ norm, respectively.
Related Work
In this section, we will describe the related work and the key differences between them and our work in two aspects—the model category and the way to model hierarchy structures in knowledge graphs.
Related Work ::: Model Category
Roughly speaking, we can divide knowledge graph embedding models into three categories—translational distance models, bilinear models, and neural network based models. Table TABREF2 exhibits several popular models.
Translational distance models describe relations as translations from source entities to target entities. TransE BIBREF8 supposes that entities and relations satisfy $\textbf {h}+\textbf {r}\approx \textbf {t}$, where $\textbf {h}, \textbf {r}, \textbf {t} \in \mathbb {R}^n$, and defines the corresponding score function as $f_r(\textbf {h},\textbf {t})=-\Vert \textbf {h}+\textbf {r}-\textbf {t}\Vert _{1/2}$. However, TransE does not perform well on 1-N, N-1 and N-N relations BIBREF14. TransH BIBREF14 overcomes the many-to-many relation problem by allowing entities to have distinct representations given different relations. The score function is defined as $f_r(\textbf {h},\textbf {t})=-\Vert \textbf {h}_{\perp }+\textbf {r}-\textbf {t}_{\perp }\Vert _2$, where $\textbf {h}_{\perp }$ and $\textbf {t}_{\perp }$ are the projections of entities onto relation-specific hyperplanes. ManifoldE BIBREF15 deals with many-to-many problems by relaxing the hypothesis $\textbf {h}+\textbf {r}\approx \textbf {t}$ to $\Vert \textbf {h}+\textbf {r}-\textbf {t}\Vert _2^2\approx \theta _r^2$ for each valid triple. In this way, the candidate entities can lie on a manifold instead of an exact point. The corresponding score function is defined as $f_r(\textbf {h},\textbf {t})=-(\Vert \textbf {h}+\textbf {r}-\textbf {t}\Vert _2^2-\theta _r^2)^2$. More recently, to better model symmetric and antisymmetric relations, RotatE BIBREF7 defines each relation as a rotation from source entities to target entities in a complex vector space. The score function is defined as $f_r(\textbf {h},\textbf {t})=-\Vert \textbf {h}\circ \textbf {r}-\textbf {t}\Vert _1$, where $\textbf {h},\textbf {r},\textbf {t}\in \mathbb {C}^k$ and $|[\textbf {r}]_i|=1$.
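For concreteness, a small NumPy sketch of two of the score functions quoted above (TransE with the $\ell _1$ norm, and RotatE); the embeddings are random placeholders rather than trained parameters.

```python
import numpy as np

k = 4
rng = np.random.default_rng(0)
h, r, t = rng.standard_normal(k), rng.standard_normal(k), rng.standard_normal(k)

# TransE (L1 version): f_r(h, t) = -|| h + r - t ||_1
transe_score = -np.sum(np.abs(h + r - t))

# RotatE: complex embeddings; relation entries are unit-modulus rotations.
h_c = rng.standard_normal(k) + 1j * rng.standard_normal(k)
t_c = rng.standard_normal(k) + 1j * rng.standard_normal(k)
r_c = np.exp(1j * rng.uniform(0, 2 * np.pi, k))      # |[r]_i| = 1
rotate_score = -np.sum(np.abs(h_c * r_c - t_c))      # -|| h o r - t ||_1
print(transe_score, rotate_score)
```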
Bilinear models use product-based score functions to match latent semantics of entities and relations embodied in their vector space representations. RESCAL BIBREF16 represents each relation as a full rank matrix, and defines the score function as $f_r(\textbf {h},\textbf {t})=\textbf {h}^\top \textbf {M}_r \textbf {t}$, which can also be seen as a bilinear function. As full rank matrices are prone to overfitting, recent works turn to make additional assumptions on $\textbf {M}_r$. For example, DistMult BIBREF9 assumes $\textbf {M}_r$ to be a diagonal matrix, and ANALOGY BIBREF19 supposes that $\textbf {M}_r$ is normal. However, these simplified models are usually less expressive and not powerful enough for general knowledge graphs. Differently, ComplEx BIBREF17 extends DistMult by introducing complex-valued embeddings to better model asymmetric and inverse relations. HolE BIBREF20 combines the expressive power of RESCAL with the efficiency and simplicity of DistMult by using the circular correlation operation.
Neural network based models have received greater attention in recent years. For example, MLP BIBREF21 and NTN BIBREF22 use a fully connected neural network to determine the scores of given triples. ConvE BIBREF18 and ConvKB BIBREF23 employ convolutional neural networks to define score functions. Recently, graph convolutional networks are also introduced, as knowledge graphs obviously have graph structures BIBREF24.
Our proposed model HAKE belongs to the translational distance models. More specifically, HAKE shares similarities with RotatE BIBREF7, in which the authors claim that they use both modulus and phase information. However, there exist two major differences between RotatE and HAKE. Detailed differences are as follows.
The aims are different. RotatE aims to model the relation patterns including symmetry/antisymmetry, inversion, and composition. HAKE aims to model the semantic hierarchy, while it can also model all the relation patterns mentioned above.
The ways to use modulus information are different. RotatE models relations as rotations in the complex space, which encourages two linked entities to have the same modulus, no matter what the relation is. The different moduli in RotatE come from the inaccuracy in training. Instead, HAKE explicitly models the modulus information, which significantly outperforms RotatE in distinguishing entities at different levels of the hierarchy.
Related Work ::: The Ways to Model Hierarchy Structures
Another related problem is how to model hierarchy structures in knowledge graphs. Some recent work considers the problem in different ways. BIBREF25 embed entities and categories jointly into a semantic space and design models for the concept categorization and dataless hierarchical classification tasks. BIBREF13 use clustering algorithms to model the hierarchical relation structures. BIBREF12 proposed TKRL, which embeds the type information into knowledge graph embeddings. That is, TKRL requires additional hierarchical type information for entities.
Different from the previous work, our work
considers the link prediction task, which is a more common task for knowledge graph embeddings;
can automatically learn the semantic hierarchy in knowledge graphs without using clustering algorithms;
does not require any additional information other than the triples in knowledge graphs.
The Proposed HAKE
In this section, we introduce our proposed model HAKE. We first introduce two categories of entities that reflect the semantic hierarchies in knowledge graphs. Afterwards, we introduce our proposed HAKE that can model entities in both of the categories.
The Proposed HAKE ::: Two Categories of Entities
To model the semantic hierarchies of knowledge graphs, a knowledge graph embedding model must be capable of distinguishing entities in the following two categories.
Entities at different levels of the hierarchy. For example, “mammal” and “dog”, “run” and ”move”.
Entities at the same level of the hierarchy. For example, “rose” and “peony”, “truck” and ”lorry”.
The Proposed HAKE ::: Hierarchy-Aware Knowledge Graph Embedding
To model both of the above categories, we propose a hierarchy-aware knowledge graph embedding model—HAKE. HAKE consists of two parts—the modulus part and the phase part—which aim to model entities in the two different categories, respectively. Figure FIGREF13 gives an illustration of the proposed model.
To distinguish embeddings in the different parts, we use $\textbf {e}_m$ ($\textbf {e}$ can be $\textbf {h}$ or $\textbf {t}$) and $\textbf {r}_m$ to denote the entity embedding and relation embedding in the modulus part, and use $\textbf {e}_p$ ($\textbf {e}$ can be $\textbf {h}$ or $\textbf {t}$) and $\textbf {r}_p$ to denote the entity embedding and relation embedding in the phase part.
The modulus part aims to model the entities at different levels of the hierarchy. Inspired by the fact that entities that have the hierarchical property can be viewed as a tree, we can use the depth of a node (entity) to model different levels of the hierarchy. Therefore, we use modulus information to model entities in the category (a), as moduli can reflect the depth in a tree. Specifically, we regard each entry of $\textbf {h}_m$ and $\textbf {t}_m$, that is, $[\textbf {h}_m]_i$ and $[\textbf {t}_m]_i$, as a modulus, and regard each entry of $\textbf {r}_m$, that is, $[\textbf {r}_m]_i$, as a scaling transformation between two moduli. We can formulate the modulus part as follows:
The corresponding distance function is:
Note that we allow the entries of entity embeddings to be negative but restrict the entries of relation embeddings to be positive. This is because that the signs of entity embeddings can help us to predict whether there exists a relation between two entities. For example, if there exists a relation $r$ between $h$ and $t_1$, and no relation between $h$ and $t_2$, then $(h, r, t_1)$ is a positive sample and $(h, r, t_2)$ is a negative sample. Our goal is to minimize $d_r(\textbf {h}_m, \textbf {t}_{1,m})$ and maximize $d_r(\textbf {h}_m, \textbf {t}_{2,m})$, so as to make a clear distinction between positive and negative samples. For the positive sample, $[\textbf {h}]_i$ and $[\textbf {t}_1]_i$ tend to share the same sign, as $[\textbf {r}_m]_i>0$. For the negative sample, the signs of $[\textbf {h}_m]_i$ and $[\textbf {t}_{2,m}]_i$ can be different if we initialize their signs randomly. In this way, $d_r(\textbf {h}_m, \textbf {t}_{2,m})$ is more likely to be larger than $d_r(\textbf {h}_m, \textbf {t}_{1,m})$, which is exactly what we desire. We will validate this argument by experiments in Section 4 of the supplementary material.
Further, we can expect the entities at higher levels of the hierarchy to have smaller modulus, as these entities are more close to the root of the tree.
If we use only the modulus part to embed knowledge graphs, then the entities in the category (b) will have the same modulus. Moreover, suppose that $r$ is a relation that reflects the same semantic hierarchy, then $[\textbf {r}]_i$ will tend to be one, as $h\circ r\circ r=h$ holds for all $h$. Hence, embeddings of the entities in the category (b) tend to be the same, which makes it hard to distinguish these entities. Therefore, a new module is required to model the entities in the category (b).
The phase part aims to model the entities at the same level of the semantic hierarchy. Inspired by the fact that points on the same circle (that is, have the same modulus) can have different phases, we use phase information to distinguish entities in the category (b). Specifically, we regard each entry of $\textbf {h}_p$ and $\textbf {t}_p$, that is, $[\textbf {h}_p]_i$ and $[\textbf {t}_p]_i$ as a phase, and regard each entry of $\textbf {r}_p$, that is, $[\textbf {r}_p]_i$, as a phase transformation. We can formulate the phase part as follows:
The corresponding distance function is:
where $\sin (\cdot )$ is an operation that applies the sine function to each element of the input. Note that we use a sine function to measure the distance between phases instead of using $\Vert \textbf {h}_p+\textbf {r}_p-\textbf {t}_p\Vert _1$, as phases have periodic characteristic. This distance function shares the same formulation with that of pRotatE BIBREF7.
Combining the modulus part and the phase part, HAKE maps entities into the polar coordinate system, where the radial coordinate and the angular coordinates correspond to the modulus part and the phase part, respectively. That is, HAKE maps an entity $h$ to $[\textbf {h}_m;\textbf {h}_p]$, where $\textbf {h}_m$ and $\textbf {h}_p$ are generated by the modulus part and the phase part, respectively, and $[\,\cdot \,; \,\cdot \,]$ denotes the concatenation of two vectors. Obviously, $([\textbf {h}_m]_i,[\textbf {h}_p]_i)$ is a 2D point in the polar coordinate system. Specifically, we formulate HAKE as follows:
The distance function of HAKE is:
where $\lambda \in \mathbb {R}$ is a parameter that learned by the model. The corresponding score function is
When two entities have the same moduli, then the modulus part $d_{r,m}(\textbf {h}_m,\textbf {t}_m)=0$. However, the phase part $d_{r,p}(\textbf {h}_p,\textbf {t}_p)$ can be very different. By combining the modulus part and the phase part, HAKE can model the entities in both the category (a) and the category (b). Therefore, HAKE can model semantic hierarchies of knowledge graphs.
When evaluating the models, we find that adding a mixture bias to $d_{r,m}(\textbf {h},\textbf {t})$ can help to improve the performance of HAKE. The modified $d_{r,m}(\textbf {h},\textbf {t})$ is given by:
where $0<\textbf {r}^{\prime }_m<1$ is a vector that have the same dimension with $\textbf {r}_m$. Indeed, the above distance function is equivalent to
where $/$ denotes the element-wise division operation. If we let $\textbf {r}_m\leftarrow (1-\textbf {r}_m^{\prime })/(\textbf {r}_m+\textbf {r}_m^{\prime })$, then the modified distance function is exactly the same as the original one when compare the distances of different entity pairs. For notation convenience, we still use $d_{r,m}(\textbf {h},\textbf {t})=\Vert \textbf {h}_m\circ \textbf {r}_m-\textbf {t}_m\Vert _2$ to represent the modulus part. We will conduct ablation studies on the bias in the experiment section.
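A NumPy sketch of the bias-free HAKE distance and score described above; the sine-based phase distance is written in the pRotatE form $\Vert \sin ((\textbf {h}_p+\textbf {r}_p-\textbf {t}_p)/2)\Vert _1$, which is an assumption based on the reference to pRotatE, and $\lambda $ is fixed here only for illustration (the mixture bias is omitted).

```python
import numpy as np

def hake_score(h_m, h_p, t_m, t_p, r_m, r_p, lam=1.0):
    """f_r(h, t) = -(d_{r,m} + lam * d_{r,p}) for the bias-free HAKE distance."""
    d_m = np.linalg.norm(h_m * r_m - t_m, ord=2)             # modulus part: || h_m o r_m - t_m ||_2
    d_p = np.sum(np.abs(np.sin((h_p + r_p - t_p) / 2.0)))    # phase part (pRotatE-style)
    return -(d_m + lam * d_p)

k = 8
rng = np.random.default_rng(0)
h_m, t_m = rng.standard_normal(k), rng.standard_normal(k)    # entity moduli may be negative
r_m = np.abs(rng.standard_normal(k))                         # relation moduli restricted to be positive
h_p, t_p, r_p = (rng.uniform(0, 2 * np.pi, k) for _ in range(3))
print(hake_score(h_m, h_p, t_m, t_p, r_m, r_p))
```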
The Proposed HAKE ::: Loss Function
To train the model, we use the negative sampling loss functions with self-adversarial training BIBREF7:
where $\gamma $ is a fixed margin, $\sigma $ is the sigmoid function, and $(h^{\prime }_i,r,t^{\prime }_i)$ is the $i$th negative triple. Moreover,
is the probability distribution of sampling negative triples, where $\alpha $ is the temperature of sampling.
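The displayed loss equations appear to have been stripped from the text above; the following PyTorch sketch follows the self-adversarial negative sampling loss of BIBREF7 as we understand it, with $\gamma $, $\alpha $, and the score convention $f_r=-d_r$ treated as assumptions.

```python
import torch
import torch.nn.functional as F

def self_adversarial_loss(pos_score, neg_scores, gamma=12.0, alpha=1.0):
    """pos_score: (batch,) scores f_r(h, t) of positive triples (f_r = -d_r, higher is better).
    neg_scores: (batch, n_neg) scores of the sampled negative triples."""
    pos_term = -F.logsigmoid(gamma + pos_score)                        # -log sigma(gamma - d_r(h, t))
    weights = torch.softmax(alpha * neg_scores, dim=-1).detach()       # self-adversarial weights p(h'_i, r, t'_i)
    neg_term = -(weights * F.logsigmoid(-gamma - neg_scores)).sum(-1)  # -sum_i p_i log sigma(d_r(h'_i, t'_i) - gamma)
    return (pos_term + neg_term).mean()
```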
Experiments and Analysis
This section is organized as follows. First, we introduce the experimental settings in detail. Then, we show the effectiveness of our proposed model on three benchmark datasets. Finally, we analyze the embeddings generated by HAKE, and show the results of ablation studies. The code of HAKE is available on GitHub at https://github.com/MIRALab-USTC/KGE-HAKE.
Experiments and Analysis ::: Experimental Settings
We evaluate our proposed models on three commonly used knowledge graph datasets—WN18RR BIBREF26, FB15k-237 BIBREF18, and YAGO3-10 BIBREF27. Details of these datasets are summarized in Table TABREF18.
WN18RR, FB15k-237, and YAGO3-10 are subsets of WN18 BIBREF8, FB15k BIBREF8, and YAGO3 BIBREF27, respectively. As pointed out by BIBREF26 and BIBREF18, WN18 and FB15k suffer from the test set leakage problem. One can attain the state-of-the-art results even using a simple rule based model. Therefore, we use WN18RR and FB15k-237 as the benchmark datasets.
Evaluation Protocol Following BIBREF8, for each triple $(h,r,t)$ in the test dataset, we replace either the head entity $h$ or the tail entity $t$ with each candidate entity to create a set of candidate triples. We then rank the candidate triples in descending order by their scores. It is worth noting that we use the “Filtered” setting as in BIBREF8, which does not take any existing valid triples into account when ranking. We choose Mean Reciprocal Rank (MRR) and Hits at N (H@N) as the evaluation metrics. Higher MRR or H@N indicates better performance.
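A minimal sketch of filtered ranking and of the MRR/H@N metrics; the data structures are illustrative placeholders.

```python
import numpy as np

def filtered_rank(candidate_scores, true_idx, other_valid_idx):
    """Rank of the gold entity among all candidates, with other known-valid candidates removed."""
    scores = candidate_scores.astype(float).copy()
    drop = [i for i in other_valid_idx if i != true_idx]
    scores[drop] = -np.inf                                 # "Filtered" setting
    return 1 + int(np.sum(scores > scores[true_idx]))

def mrr_and_hits(ranks, n=10):
    ranks = np.asarray(ranks, dtype=float)
    return float((1.0 / ranks).mean()), float((ranks <= n).mean())
```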
Training Protocol We use Adam BIBREF28 as the optimizer, and use grid search to find the best hyperparameters based on the performance on the validation datasets. To make the model easier to train, we add an additional coefficient to the distance function, i.e., $d_{r}(\textbf {h},\textbf {t})=\lambda _1d_{r,m}(\textbf {h}_m,\textbf {t}_m)+\lambda _2 d_{r,p}(\textbf {h}_p,\textbf {t}_p)$, where $\lambda _1,\lambda _2\in \mathbb {R}$.
Baseline Model One may argue that the phase part is unnecessary, as we can distinguish entities in the category (b) by allowing $[\textbf {r}]_i$ to be negative. We propose a model—ModE—that uses only the modulus part but allows $[\textbf {r}]_i<0$. Specifically, the distance function of ModE is $d_r(\textbf {h},\textbf {t})=\Vert \textbf {h}\circ \textbf {r}-\textbf {t}\Vert _2$.
Experiments and Analysis ::: Main Results
In this part, we show the performance of our proposed models—HAKE and ModE—against existing state-of-the-art methods, including TransE BIBREF8, DistMult BIBREF9, ComplEx BIBREF17, ConvE BIBREF18, and RotatE BIBREF7.
Table TABREF19 shows the performance of HAKE, ModE, and several previous models. Our baseline model ModE shares similar simplicity with TransE, but significantly outperforms it on all datasets. Surprisingly, ModE even outperforms more complex models such as DistMult, ConvE and ComplEx on all datasets, and beats the state-of-the-art model—RotatE—on the FB15k-237 and YAGO3-10 datasets, which demonstrates the great power of modulus information. Table TABREF19 also shows that our HAKE significantly outperforms existing state-of-the-art methods on all datasets.
WN18RR dataset consists of two kinds of relations: the symmetric relations such as $\_similar\_to$, which link entities in the category (b); other relations such as $\_hypernym$ and $\_member\_meronym$, which link entities in the category (a). Actually, RotatE can model entities in the category (b) very well BIBREF7. However, HAKE gains a 0.021 higher MRR, a 2.4% higher H@1, and a 2.4% higher H@3 against RotatE, respectively. The superior performance of HAKE compared with RotatE implies that our proposed model can better model different levels in the hierarchy.
The FB15k-237 dataset has more complex relation types and fewer entities, compared with WN18RR and YAGO3-10. Although there are relations that reflect hierarchy in FB15k-237, there are also many relations, such as “/location/location/time_zones” and “/film/film/prequel”, that do not lead to hierarchy. This characteristic of the dataset accounts for why our proposed models do not outperform the previous state-of-the-art as much as they do on the WN18RR and YAGO3-10 datasets. However, the results also show that our models can gain better performance so long as there exist semantic hierarchies in knowledge graphs. As almost all knowledge graphs have such hierarchy structures, our model is widely applicable.
The YAGO3-10 dataset contains entities with high relation-specific indegree BIBREF18. For example, the link prediction task $(?, hasGender, male)$ has over 1000 true answers, which makes the task challenging. Fortunately, we can regard “male” as an entity at a higher level of the hierarchy and the predicted head entities as entities at a lower level. In this way, YAGO3-10 is a dataset that clearly has the semantic hierarchy property, and we can expect that our proposed models are capable of working well on it. Table TABREF19 validates our expectation. Both ModE and HAKE significantly outperform the previous state-of-the-art. Notably, HAKE gains a 0.050 higher MRR, 6.0% higher H@1 and 4.6% higher H@3 than RotatE, respectively.
Experiments and Analysis ::: Analysis on Relation Embeddings
In this part, we first show that HAKE can effectively model the hierarchy structures by analyzing the moduli of relation embeddings. Then, we show that the phase part of HAKE can help us to distinguish entities at the same level of the hierarchy by analyzing the phases of relation embeddings.
In Figure FIGREF20, we plot the distribution histograms of moduli of six relations. These relations are drawn from WN18RR, FB15k-237, and YAGO3-10. Specifically, the relations in Figures FIGREF20a, FIGREF20c, FIGREF20e and FIGREF20f are drawn from WN18RR. The relation in Figure FIGREF20d is drawn from FB15k-237. The relation in Figure FIGREF20b is drawn from YAGO3-10. We divide the relations in Figure FIGREF20 into three groups.
Relations in Figures FIGREF20c and FIGREF20d connect the entities at the same level of the semantic hierarchy;
Relations in Figures FIGREF20a and FIGREF20b represent that tail entities are at higher levels than head entities of the hierarchy;
Relations in Figures FIGREF20e and FIGREF20f represent that tail entities are at lower levels than head entities of the hierarchy.
As described in the model description section, we expect entities at higher levels of the hierarchy to have small moduli. The experiments validate our expectation. For both ModE and HAKE, most entries of the relations in the group (A) take values around one, so the head entities and tail entities have approximately the same moduli. In the group (B), most entries of the relations take values less than one, so the head entities have smaller moduli than the tail entities. The cases in the group (C) are contrary to those in the group (B). These results show that our model can capture the semantic hierarchies in knowledge graphs. Moreover, compared with ModE, the relation embeddings' moduli of HAKE have lower variances, which shows that HAKE can model hierarchies more clearly.
As mentioned above, relations in the group (A) reflect the same semantic hierarchy, and are expected to have the moduli of about one. Obviously, it is hard to distinguish entities linked by these relations only using the modulus part. In Figure FIGREF22, we plot the phases of the relations in the group (A). The results show that the entities at the same level of the hierarchy can be distinguished by their phases, as many phases have the values of $\pi $.
Experiments and Analysis ::: Analysis on Entity Embeddings
In this part, to further show that HAKE can capture the semantic hierarchies between entities, we visualize the embeddings of several entity pairs.
We plot the entity embeddings of two models: the previous state-of-the-art RotatE and our proposed HAKE. RotatE regards each entity as a group of complex numbers. As a complex number can be seen as a point on a 2D plane, we can plot the entity embeddings on a 2D plane. As for HAKE, we have mentioned that it maps entities into the polar coordinate system. Therefore, we can also plot the entity embeddings generated by HAKE on a 2D plane based on their polar coordinates. For a fair comparison, we set $k=500$. That is, each plot contains 500 points, and the actual dimension of entity embeddings is 1000. Note that we use the logarithmic scale to better display the differences between entity embeddings. As all the moduli have values less than one, after applying the logarithm operation, the larger radii in the figures will actually represent smaller moduli.
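A matplotlib sketch of the kind of plot described here, assuming per-dimension (modulus, phase) pairs and a log-scaled radius; the random embeddings are placeholders for trained ones.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_entity(moduli, phases, label):
    """Plot one entity's k (modulus, phase) pairs; smaller moduli get larger (log-scaled) radii."""
    radius = -np.log(np.clip(moduli, 1e-6, 1.0))       # moduli < 1, so the radius is positive
    x, y = radius * np.cos(phases), radius * np.sin(phases)
    plt.scatter(x, y, s=4, label=label)

k = 500
rng = np.random.default_rng(1)
plot_entity(rng.uniform(0.05, 0.9, k), rng.uniform(0, 2 * np.pi, k), "head entity")
plot_entity(rng.uniform(0.05, 0.9, k), rng.uniform(0, 2 * np.pi, k), "tail entity")
plt.legend(); plt.gca().set_aspect("equal"); plt.show()
```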
Figure FIGREF29 shows the visualization results of three triples from the WN18RR dataset. Compared with the tail entities, the head entities in Figures FIGREF29a, FIGREF29b, and FIGREF29c are at lower levels, similar levels, higher levels in the semantic hierarchy, respectively. We can see that there exist clear concentric circles in the visualization results of HAKE, which demonstrates that HAKE can effectively model the semantic hierarchies. However, in RotatE, the entity embeddings in all three subfigures are mixed, making it hard to distinguish entities at different levels in the hierarchy.
Experiments and Analysis ::: Ablation Studies
In this part, we conduct ablation studies on the modulus part and the phase part of HAKE, as well as the mixture bias item. Table TABREF26 shows the results on three benchmark datasets.
We can see that the bias can improve the performance of HAKE on nearly all metrics. Specifically, the bias improves the H@1 score of $4.7\%$ on YAGO3-10 dataset, which illustrates the effectiveness of the bias.
We also observe that the modulus part of HAKE does not perform well on all datasets, due to its inability to distinguish the entities at the same level of the hierarchy. When only using the phase part, HAKE degenerates to the pRotatE model BIBREF7. It performs better than the modulus part, because it can well model entities at the same level of the hierarchy. However, our HAKE model significantly outperforms the modulus part and the phase part on all datasets, which demonstrates the importance to combine the two parts for modeling semantic hierarchies in knowledge graphs.
Experiments and Analysis ::: Comparison with Other Related Work
We compare our models with TKRL models BIBREF12, which also aim to model the hierarchy structures. For the difference between HAKE and TKRL, please refer to the Related Work section. Table TABREF27 shows the H@10 scores of HAKE and TKRLs on FB15k dataset. The best performance of TKRL is .734 obtained by the WHE+STC version, while the H@10 score of our HAKE model is .884. The results show that HAKE significantly outperforms TKRL, though it does not require additional information.
Conclusion
To model the semantic hierarchies in knowledge graphs, we propose a novel hierarchy-aware knowledge graph embedding model—HAKE—which maps entities into the polar coordinate system. Experiments show that our proposed HAKE significantly outperforms several existing state-of-the-art methods on benchmark datasets for the link prediction task. A further investigation shows that HAKE is capable of modeling entities at both different levels and the same levels in the semantic hierarchies.
Appendix
In this appendix, we will provide analysis on relation patterns, negative entity embeddings, and moduli of entity embeddings. Then, we will give more visualization results on semantic hierarchies.
A. Analysis on Relation Patterns
In this section, we prove that our HAKE model can infer the (anti)symmetry, inversion and composition relation patterns. Detailed propositions and their proofs are as follows.
Proposition 1 HAKE can infer the (anti)symmetry pattern.
If $r(x, y)$ and $r(y, x)$ hold, we have
Then we have
Otherwise, if $r(x, y)$ and $\lnot r(y, x)$ hold, we have
Proposition 2 HAKE can infer the inversion pattern.
If $r_1(x, y)$ and $r_2(y, x)$ hold, we have
Then, we have
Proposition 3 HAKE can infer the composition pattern.
If $r_1(x, z)$, $r_2(x, y)$ and $r_3(y, z)$ hold, we have
Then we have
B. Analysis on Negative Entity Embeddings
We denote the linked entity pairs as the set of entity pairs linked by some relation, and denote the unlinked entity pairs as the set of entity pairs that do not appear together in any triple of the train/valid/test datasets. It is worth noting that the unlinked pairs may contain valid triples, as the knowledge graph is incomplete. For both the linked and the unlinked entity pairs, we count the embedding entries of two entities that have different signs. Figure FIGREF34 shows the result.
For the linked entity pairs, as we expected, most of the entries have the same sign. Due to the large amount of unlinked entity pairs, we randomly sample a part of them for plotting. For the unlinked entity pairs, around half of the entries have different signs, which is consistent with the random initialization. The results support our hypothesis that the negative signs of entity embeddings can help our model to distinguish positive and negative triples.
C. Analysis on Moduli of Entity Embeddings
Figure FIGREF37 shows the moduli of entity embeddings. We can observe that RotatE encourages the moduli of embeddings to be the same, as the relations are modeled as rotations in a complex space. Compared with RotatE, the moduli of entity embeddings in HAKE are more dispersed, giving HAKE more potential to model the semantic hierarchies.
D. More Results on Semantic Hierarchies
In this part, we visualize more triples from WN18RR. We plot the head and tail entities on 2D planes using the same method as that in the main text. The visualization results are in Figure FIGREF41, where the subcaptions demonstrate the corresponding triples. The figures show that, compared with RotatE, our HAKE model can better model the entities both in different hierarchies and in the same hierarchy. | WN18RR, FB15k-237, YAGO3-10 |
6852217163ea678f2009d4726cb6bd03cf6a8f78 | 6852217163ea678f2009d4726cb6bd03cf6a8f78_1 | Q: What benchmark datasets are used for the link prediction task?
Text: Introduction
Knowledge graphs are usually collections of factual triples—(head entity, relation, tail entity), which represent human knowledge in a structured way. In the past few years, we have witnessed the great achievement of knowledge graphs in many areas, such as natural language processing BIBREF0, question answering BIBREF1, and recommendation systems BIBREF2.
Although commonly used knowledge graphs contain billions of triples, they still suffer from the incompleteness problem that a lot of valid triples are missing, as it is impractical to find all valid triples manually. Therefore, knowledge graph completion, also known as link prediction in knowledge graphs, has attracted much attention recently. Link prediction aims to automatically predict missing links between entities based on known links. It is a challenging task as we not only need to predict whether there is a relation between two entities, but also need to determine which relation it is.
Inspired by word embeddings BIBREF3 that can well capture semantic meaning of words, researchers turn to distributed representations of knowledge graphs (aka, knowledge graph embeddings) to deal with the link prediction problem. Knowledge graph embeddings regard entities and relations as low dimensional vectors (or matrices, tensors), which can be stored and computed efficiently. Moreover, like in the case of word embeddings, knowledge graph embeddings can preserve the semantics and inherent structures of entities and relations. Therefore, other than the link prediction task, knowledge graph embeddings can also be used in various downstream tasks, such as triple classification BIBREF4, relation inference BIBREF5, and search personalization BIBREF6.
The success of existing knowledge graph embedding models heavily relies on their ability to model connectivity patterns of the relations, such as symmetry/antisymmetry, inversion, and composition BIBREF7. For example, TransE BIBREF8, which represent relations as translations, can model the inversion and composition patterns. DistMult BIBREF9, which models the three-way interactions between head entities, relations, and tail entities, can model the symmetry pattern. RotatE BIBREF7, which represents entities as points in a complex space and relations as rotations, can model relation patterns including symmetry/antisymmetry, inversion, and composition. However, many existing models fail to model semantic hierarchies in knowledge graphs.
Semantic hierarchy is a ubiquitous property in knowledge graphs. For instance, WordNet BIBREF10 contains the triple [arbor/cassia/palm, hypernym, tree], where “tree” is at a higher level than “arbor/cassia/palm” in the hierarchy. Freebase BIBREF11 contains the triple [England, /location/location/contains, Pontefract/Lancaster], where “Pontefract/Lancaster” is at a lower level than “England” in the hierarchy. Although there exists some work that takes the hierarchy structures into account BIBREF12, BIBREF13, they usually require additional data or process to obtain the hierarchy information. Therefore, it is still challenging to find an approach that is capable of modeling the semantic hierarchy automatically and effectively.
In this paper, we propose a novel knowledge graph embedding model—namely, Hierarchy-Aware Knowledge Graph Embedding (HAKE). To model the semantic hierarchies, HAKE is expected to distinguish entities in two categories: (a) at different levels of the hierarchy; (b) at the same level of the hierarchy. Inspired by the fact that entities that have the hierarchical properties can be viewed as a tree, we can use the depth of a node (entity) to model different levels of the hierarchy. Thus, we use modulus information to model entities in the category (a), as the size of moduli can reflect the depth. Under the above settings, entities in the category (b) will have roughly the same modulus, which is hard to distinguish. Inspired by the fact that the points on the same circle can have different phases, we use phase information to model entities in the category (b). Combining the modulus and phase information, HAKE maps entities into the polar coordinate system, where the radial coordinate corresponds to the modulus information and the angular coordinate corresponds to the phase information. Experiments show that our proposed HAKE model can not only clearly distinguish the semantic hierarchies of entities, but also significantly and consistently outperform several state-of-the-art methods on the benchmark datasets.
Notations Throughout this paper, we use lower-case letters $h$, $r$, and $t$ to represent head entities, relations, and tail entities, respectively. The triplet $(h,r,t)$ denotes a fact in knowledge graphs. The corresponding boldface lower-case letters $\textbf {h}$, $\textbf {r}$ and $\textbf {t}$ denote the embeddings (vectors) of head entities, relations, and tail entities. The $i$-th entry of a vector $\textbf {h}$ is denoted as $[\textbf {h}]_i$. Let $k$ denote the embedding dimension.
Let $\circ :\mathbb {R}^n\times \mathbb {R}^n\rightarrow \mathbb {R}^n$ denote the Hadamard product between two vectors, that is, $[\textbf {a}\circ \textbf {b}]_i=[\textbf {a}]_i\,[\textbf {b}]_i$,
and $\Vert \cdot \Vert _1$, $\Vert \cdot \Vert _2$ denote the $\ell _1$ and $\ell _2$ norm, respectively.
Related Work
In this section, we will describe the related work and the key differences between them and our work in two aspects—the model category and the way to model hierarchy structures in knowledge graphs.
Related Work ::: Model Category
Roughly speaking, we can divide knowledge graph embedding models into three categories—translational distance models, bilinear models, and neural network based models. Table TABREF2 exhibits several popular models.
Translational distance models describe relations as translations from source entities to target entities. TransE BIBREF8 supposes that entities and relations satisfy $\textbf {h}+\textbf {r}\approx \textbf {t}$, where $\textbf {h}, \textbf {r}, \textbf {t} \in \mathbb {R}^n$, and defines the corresponding score function as $f_r(\textbf {h},\textbf {t})=-\Vert \textbf {h}+\textbf {r}-\textbf {t}\Vert _{1/2}$. However, TransE does not perform well on 1-N, N-1 and N-N relations BIBREF14. TransH BIBREF14 overcomes the many-to-many relation problem by allowing entities to have distinct representations given different relations. The score function is defined as $f_r(\textbf {h},\textbf {t})=-\Vert \textbf {h}_{\perp }+\textbf {r}-\textbf {t}_{\perp }\Vert _2$, where $\textbf {h}_{\perp }$ and $\textbf {t}_{\perp }$ are the projections of entities onto relation-specific hyperplanes. ManifoldE BIBREF15 deals with many-to-many problems by relaxing the hypothesis $\textbf {h}+\textbf {r}\approx \textbf {t}$ to $\Vert \textbf {h}+\textbf {r}-\textbf {t}\Vert _2^2\approx \theta _r^2$ for each valid triple. In this way, the candidate entities can lie on a manifold instead of an exact point. The corresponding score function is defined as $f_r(\textbf {h},\textbf {t})=-(\Vert \textbf {h}+\textbf {r}-\textbf {t}\Vert _2^2-\theta _r^2)^2$. More recently, to better model symmetric and antisymmetric relations, RotatE BIBREF7 defines each relation as a rotation from source entities to target entities in a complex vector space. The score function is defined as $f_r(\textbf {h},\textbf {t})=-\Vert \textbf {h}\circ \textbf {r}-\textbf {t}\Vert _1$, where $\textbf {h},\textbf {r},\textbf {t}\in \mathbb {C}^k$ and $|[\textbf {r}]_i|=1$.
Bilinear models use product-based score functions to match latent semantics of entities and relations embodied in their vector space representations. RESCAL BIBREF16 represents each relation as a full rank matrix, and defines the score function as $f_r(\textbf {h},\textbf {t})=\textbf {h}^\top \textbf {M}_r \textbf {t}$, which can also be seen as a bilinear function. As full rank matrices are prone to overfitting, recent works make additional assumptions on $\textbf {M}_r$. For example, DistMult BIBREF9 assumes $\textbf {M}_r$ to be a diagonal matrix, and ANALOGY BIBREF19 supposes that $\textbf {M}_r$ is normal. However, these simplified models are usually less expressive and not powerful enough for general knowledge graphs. Differently, ComplEx BIBREF17 extends DistMult by introducing complex-valued embeddings to better model asymmetric and inverse relations. HolE BIBREF20 combines the expressive power of RESCAL with the efficiency and simplicity of DistMult by using the circular correlation operation.
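Similarly, the bilinear scores of DistMult and ComplEx reduce to a few lines; again this is only an illustrative sketch, not the authors' code.

```python
import torch

def distmult_score(h, r, t):
    # f_r(h, t) = h^T diag(r) t = sum_i [h]_i [r]_i [t]_i, real embeddings
    return (h * r * t).sum(dim=-1)

def complex_score(h, r, t):
    # ComplEx: Re(sum_i [h]_i [r]_i conj([t]_i)), complex embeddings
    return torch.real((h * r * torch.conj(t)).sum(dim=-1))
```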
Neural network based models have received increasing attention in recent years. For example, MLP BIBREF21 and NTN BIBREF22 use a fully connected neural network to determine the scores of given triples. ConvE BIBREF18 and ConvKB BIBREF23 employ convolutional neural networks to define score functions. Recently, graph convolutional networks have also been introduced, as knowledge graphs naturally have graph structures BIBREF24.
Our proposed model HAKE belongs to the translational distance models. More specifically, HAKE shares similarities with RotatE BIBREF7, in which the authors claim that they use both modulus and phase information. However, there exist two major differences between RotatE and HAKE. Detailed differences are as follows.
The aims are different. RotatE aims to model the relation patterns including symmetry/antisymmetry, inversion, and composition. HAKE aims to model the semantic hierarchy, while it can also model all the relation patterns mentioned above.
The ways to use modulus information are different. RotatE models relations as rotations in the complex space, which encourages two linked entities to have the same modulus, no matter what the relation is. The different moduli in RotatE come from the inaccuracy in training. Instead, HAKE explicitly models the modulus information, which significantly outperforms RotatE in distinguishing entities at different levels of the hierarchy.
Related Work ::: The Ways to Model Hierarchy Structures
Another related problem is how to model hierarchy structures in knowledge graphs. Some recent work considers the problem in different ways. BIBREF25 embed entities and categories jointly into a semantic space and design models for the concept categorization and dataless hierarchical classification tasks. BIBREF13 use clustering algorithms to model the hierarchical relation structures. BIBREF12 propose TKRL, which embeds the type information into knowledge graph embeddings. That is, TKRL requires additional hierarchical type information for entities.
Different from the previous work, our work
considers the link prediction task, which is a more common task for knowledge graph embeddings;
can automatically learn the semantic hierarchy in knowledge graphs without using clustering algorithms;
does not require any additional information other than the triples in knowledge graphs.
The Proposed HAKE
In this section, we introduce our proposed model HAKE. We first introduce two categories of entities that reflect the semantic hierarchies in knowledge graphs. Afterwards, we introduce our proposed HAKE that can model entities in both of the categories.
The Proposed HAKE ::: Two Categories of Entities
To model the semantic hierarchies of knowledge graphs, a knowledge graph embedding model must be capable of distinguishing entities in the following two categories.
(a) Entities at different levels of the hierarchy. For example, “mammal” and “dog”, “run” and “move”.
(b) Entities at the same level of the hierarchy. For example, “rose” and “peony”, “truck” and “lorry”.
The Proposed HAKE ::: Hierarchy-Aware Knowledge Graph Embedding
To model both of the above categories, we propose a hierarchy-aware knowledge graph embedding model—HAKE. HAKE consists of two parts—the modulus part and the phase part—which aim to model entities in the two different categories, respectively. Figure FIGREF13 gives an illustration of the proposed model.
To distinguish embeddings in the different parts, we use $\textbf {e}_m$ ($\textbf {e}$ can be $\textbf {h}$ or $\textbf {t}$) and $\textbf {r}_m$ to denote the entity embedding and relation embedding in the modulus part, and use $\textbf {e}_p$ ($\textbf {e}$ can be $\textbf {h}$ or $\textbf {t}$) and $\textbf {r}_p$ to denote the entity embedding and relation embedding in the phase part.
The modulus part aims to model the entities at different levels of the hierarchy. Inspired by the fact that entities with hierarchical properties can be organized into a tree, we can use the depth of a node (entity) to model different levels of the hierarchy. Therefore, we use modulus information to model entities in the category (a), as moduli can reflect the depth in a tree. Specifically, we regard each entry of $\textbf {h}_m$ and $\textbf {t}_m$, that is, $[\textbf {h}_m]_i$ and $[\textbf {t}_m]_i$, as a modulus, and regard each entry of $\textbf {r}_m$, that is, $[\textbf {r}_m]_i$, as a scaling transformation between two moduli. We can formulate the modulus part as follows: $\textbf {h}_m\circ \textbf {r}_m=\textbf {t}_m$, where $\textbf {h}_m,\textbf {t}_m\in \mathbb {R}^k$ and $\textbf {r}_m\in \mathbb {R}_+^k$.
The corresponding distance function is: $d_{r,m}(\textbf {h}_m,\textbf {t}_m)=\Vert \textbf {h}_m\circ \textbf {r}_m-\textbf {t}_m\Vert _2$.
Note that we allow the entries of entity embeddings to be negative but restrict the entries of relation embeddings to be positive. This is because the signs of entity embeddings can help us to predict whether there exists a relation between two entities. For example, if there exists a relation $r$ between $h$ and $t_1$, and no relation between $h$ and $t_2$, then $(h, r, t_1)$ is a positive sample and $(h, r, t_2)$ is a negative sample. Our goal is to minimize $d_{r,m}(\textbf {h}_m, \textbf {t}_{1,m})$ and maximize $d_{r,m}(\textbf {h}_m, \textbf {t}_{2,m})$, so as to make a clear distinction between positive and negative samples. For the positive sample, $[\textbf {h}_m]_i$ and $[\textbf {t}_{1,m}]_i$ tend to share the same sign, as $[\textbf {r}_m]_i>0$. For the negative sample, the signs of $[\textbf {h}_m]_i$ and $[\textbf {t}_{2,m}]_i$ can be different if we initialize their signs randomly. In this way, $d_{r,m}(\textbf {h}_m, \textbf {t}_{2,m})$ is more likely to be larger than $d_{r,m}(\textbf {h}_m, \textbf {t}_{1,m})$, which is exactly what we desire. We validate this argument by experiments in Section B of the Appendix.
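A tiny numeric check of this argument, with made-up two-dimensional embeddings, is shown below: when the head and the true tail share signs, the modulus distance is small, while a tail with flipped signs ends up much farther away.

```python
import torch

h_m  = torch.tensor([ 0.6, -0.8])
r_m  = torch.tensor([ 0.9,  1.1])            # relation entries restricted to be positive
t1_m = torch.tensor([ 0.5, -0.9])            # same signs as h_m: positive sample
t2_m = torch.tensor([-0.5,  0.9])            # flipped signs: negative sample

d_rm = lambda h, r, t: torch.norm(h * r - t, p=2)
print(d_rm(h_m, r_m, t1_m), d_rm(h_m, r_m, t2_m))  # ~0.045 vs ~2.06
```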
Further, we can expect the entities at higher levels of the hierarchy to have smaller moduli, as these entities are closer to the root of the tree.
If we use only the modulus part to embed knowledge graphs, then the entities in the category (b) will have the same modulus. Moreover, suppose that $r$ is a relation that reflects the same semantic hierarchy; then $[\textbf {r}_m]_i$ will tend to be one, as $\textbf {h}_m\circ \textbf {r}_m\circ \textbf {r}_m=\textbf {h}_m$ holds for all such $\textbf {h}_m$. Hence, embeddings of the entities in the category (b) tend to be the same, which makes it hard to distinguish these entities. Therefore, a new module is required to model the entities in the category (b).
The phase part aims to model the entities at the same level of the semantic hierarchy. Inspired by the fact that points on the same circle (that is, points with the same modulus) can have different phases, we use phase information to distinguish entities in the category (b). Specifically, we regard each entry of $\textbf {h}_p$ and $\textbf {t}_p$, that is, $[\textbf {h}_p]_i$ and $[\textbf {t}_p]_i$, as a phase, and regard each entry of $\textbf {r}_p$, that is, $[\textbf {r}_p]_i$, as a phase transformation. We can formulate the phase part as follows: $(\textbf {h}_p+\textbf {r}_p)\bmod 2\pi =\textbf {t}_p$, where $\textbf {h}_p,\textbf {t}_p,\textbf {r}_p\in [0,2\pi )^k$.
The corresponding distance function is: $d_{r,p}(\textbf {h}_p,\textbf {t}_p)=\Vert \sin ((\textbf {h}_p+\textbf {r}_p-\textbf {t}_p)/2)\Vert _1$,
where $\sin (\cdot )$ is an operation that applies the sine function to each element of the input. Note that we use a sine function to measure the distance between phases instead of using $\Vert \textbf {h}_p+\textbf {r}_p-\textbf {t}_p\Vert _1$, as phases have a periodic characteristic. This distance function shares the same formulation as that of pRotatE BIBREF7.
Combining the modulus part and the phase part, HAKE maps entities into the polar coordinate system, where the radial coordinate and the angular coordinate correspond to the modulus part and the phase part, respectively. That is, HAKE maps an entity $h$ to $[\textbf {h}_m;\textbf {h}_p]$, where $\textbf {h}_m$ and $\textbf {h}_p$ are generated by the modulus part and the phase part, respectively, and $[\,\cdot \,; \,\cdot \,]$ denotes the concatenation of two vectors. Obviously, $([\textbf {h}_m]_i,[\textbf {h}_p]_i)$ is a 2D point in the polar coordinate system. Specifically, we formulate HAKE by requiring both parts to hold: $\textbf {h}_m\circ \textbf {r}_m=\textbf {t}_m$ and $(\textbf {h}_p+\textbf {r}_p)\bmod 2\pi =\textbf {t}_p$.
The distance function of HAKE is: $d_r(\textbf {h},\textbf {t})=d_{r,m}(\textbf {h}_m,\textbf {t}_m)+\lambda d_{r,p}(\textbf {h}_p,\textbf {t}_p)$,
where $\lambda \in \mathbb {R}$ is a parameter learned by the model. The corresponding score function is $f_r(\textbf {h},\textbf {t})=-d_r(\textbf {h},\textbf {t})=-d_{r,m}(\textbf {h}_m,\textbf {t}_m)-\lambda d_{r,p}(\textbf {h}_p,\textbf {t}_p)$.
When two entities have the same moduli, then the modulus part $d_{r,m}(\textbf {h}_m,\textbf {t}_m)=0$. However, the phase part $d_{r,p}(\textbf {h}_p,\textbf {t}_p)$ can be very different. By combining the modulus part and the phase part, HAKE can model the entities in both the category (a) and the category (b). Therefore, HAKE can model semantic hierarchies of knowledge graphs.
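To make the combined model concrete, here is a minimal PyTorch sketch of the two distance functions and the resulting score, following the formulas above (the mixture bias discussed next is omitted); the function and variable names are ours, not those of the released implementation.

```python
import torch

def modulus_distance(h_m, r_m, t_m):
    # d_{r,m}(h_m, t_m) = ||h_m o r_m - t_m||_2, with positive relation entries
    return torch.norm(h_m * r_m - t_m, p=2, dim=-1)

def phase_distance(h_p, r_p, t_p):
    # d_{r,p}(h_p, t_p) = ||sin((h_p + r_p - t_p) / 2)||_1, periodic in 2*pi
    return torch.sin((h_p + r_p - t_p) / 2).abs().sum(dim=-1)

def hake_score(h_m, h_p, r_m, r_p, t_m, t_p, lam):
    # f_r(h, t) = -d_{r,m}(h_m, t_m) - lambda * d_{r,p}(h_p, t_p)
    return -(modulus_distance(h_m, r_m, t_m) + lam * phase_distance(h_p, r_p, t_p))

batch, k = 3, 4
h_m, t_m = torch.randn(batch, k), torch.randn(batch, k)
r_m = torch.rand(batch, k)                                    # positive entries
h_p, r_p, t_p = (torch.rand(batch, k) * 2 * torch.pi for _ in range(3))
print(hake_score(h_m, h_p, r_m, r_p, t_m, t_p, lam=0.5))
```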
When evaluating the models, we find that adding a mixture bias to $d_{r,m}(\textbf {h},\textbf {t})$ can help to improve the performance of HAKE. The modified $d_{r,m}(\textbf {h},\textbf {t})$ is given by:
where $0<\textbf {r}^{\prime }_m<1$ is a vector that has the same dimension as $\textbf {r}_m$. Indeed, the above distance function is equivalent to
where $/$ denotes the element-wise division operation. If we let $\textbf {r}_m\leftarrow (1-\textbf {r}_m^{\prime })/(\textbf {r}_m+\textbf {r}_m^{\prime })$, then the modified distance function is exactly the same as the original one when comparing the distances of different entity pairs. For notation convenience, we still use $d_{r,m}(\textbf {h},\textbf {t})=\Vert \textbf {h}_m\circ \textbf {r}_m-\textbf {t}_m\Vert _2$ to represent the modulus part. We will conduct ablation studies on the bias in the experiment section.
The Proposed HAKE ::: Loss Function
To train the model, we use the negative sampling loss function with self-adversarial training BIBREF7: $L=-\log \sigma (\gamma -d_r(\textbf {h},\textbf {t}))-\sum _{i=1}^{n}p(h^{\prime }_i,r,t^{\prime }_i)\log \sigma (d_r(\textbf {h}^{\prime }_i,\textbf {t}^{\prime }_i)-\gamma )$,
where $\gamma $ is a fixed margin, $\sigma $ is the sigmoid function, $(h^{\prime }_i,r,t^{\prime }_i)$ is the $i$-th negative triple, and $n$ is the number of negative samples per positive triple. Moreover, $p(h^{\prime }_j,r,t^{\prime }_j)=\frac{\exp \left(\alpha f_r(\textbf {h}^{\prime }_j,\textbf {t}^{\prime }_j)\right)}{\sum _{i}\exp \left(\alpha f_r(\textbf {h}^{\prime }_i,\textbf {t}^{\prime }_i)\right)}$
is the probability distribution of sampling negative triples, where $\alpha $ is the temperature of sampling.
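The sketch below shows one way to compute this loss given the score of a positive triple and of its $n$ negatives per example; the signature and the choice to detach the self-adversarial weights are our assumptions, not code taken from the paper.

```python
import torch
import torch.nn.functional as F

def self_adversarial_loss(pos_score, neg_scores, gamma, alpha):
    # pos_score: (batch,) scores f_r(h, t); neg_scores: (batch, n) scores of negatives
    pos_loss = -F.logsigmoid(gamma + pos_score)                 # gamma - d_r = gamma + f_r
    weights = F.softmax(alpha * neg_scores, dim=-1).detach()    # p(h'_i, r, t'_i), no gradient
    neg_loss = -(weights * F.logsigmoid(-(gamma + neg_scores))).sum(dim=-1)
    return (pos_loss + neg_loss).mean()

loss = self_adversarial_loss(torch.randn(8), torch.randn(8, 16), gamma=9.0, alpha=1.0)
print(loss)
```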
Experiments and Analysis
This section is organized as follows. First, we introduce the experimental settings in detail. Then, we show the effectiveness of our proposed model on three benchmark datasets. Finally, we analyze the embeddings generated by HAKE, and show the results of ablation studies. The code of HAKE is available on GitHub at https://github.com/MIRALab-USTC/KGE-HAKE.
Experiments and Analysis ::: Experimental Settings
We evaluate our proposed models on three commonly used knowledge graph datasets—WN18RR BIBREF26, FB15k-237 BIBREF18, and YAGO3-10 BIBREF27. Details of these datasets are summarized in Table TABREF18.
WN18RR, FB15k-237, and YAGO3-10 are subsets of WN18 BIBREF8, FB15k BIBREF8, and YAGO3 BIBREF27, respectively. As pointed out by BIBREF26 and BIBREF18, WN18 and FB15k suffer from the test set leakage problem. One can attain state-of-the-art results even using a simple rule-based model. Therefore, we use WN18RR and FB15k-237 as the benchmark datasets.
Evaluation Protocol Following BIBREF8, for each triple $(h,r,t)$ in the test dataset, we replace either the head entity $h$ or the tail entity $t$ with each candidate entity to create a set of candidate triples. We then rank the candidate triples in descending order by their scores. It is worth noting that we use the “Filtered” setting as in BIBREF8, which does not take any existing valid triples into account when ranking. We choose Mean Reciprocal Rank (MRR) and Hits at N (H@N) as the evaluation metrics. A higher MRR or H@N indicates better performance.
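As an illustration, once the filtered rank of the correct entity is known for every test triple, the two metrics can be computed as follows (a generic sketch, not the authors' evaluation script):

```python
import torch

def mrr_and_hits(ranks, ns=(1, 3, 10)):
    # ranks: 1-based filtered ranks of the correct entity for each test triple
    ranks = torch.as_tensor(ranks, dtype=torch.float)
    metrics = {"MRR": (1.0 / ranks).mean().item()}
    for n in ns:
        metrics[f"H@{n}"] = (ranks <= n).float().mean().item()
    return metrics

print(mrr_and_hits([1, 2, 5, 14, 1]))  # {'MRR': 0.55..., 'H@1': 0.4, 'H@3': 0.6, 'H@10': 0.8}
```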
Training Protocol We use Adam BIBREF28 as the optimizer, and use grid search to find the best hyperparameters based on the performance on the validation datasets. To make the model easier to train, we add additional coefficients to the distance function, i.e., $d_{r}(\textbf {h},\textbf {t})=\lambda _1d_{r,m}(\textbf {h}_m,\textbf {t}_m)+\lambda _2 d_{r,p}(\textbf {h}_p,\textbf {t}_p)$, where $\lambda _1,\lambda _2\in \mathbb {R}$.
Baseline Model One may argue that the phase part is unnecessary, as we can distinguish entities in the category (b) by allowing $[\textbf {r}]_i$ to be negative. We propose such a model—ModE—that uses only the modulus part but allows $[\textbf {r}]_i<0$. Specifically, the distance function of ModE is $d_r(\textbf {h},\textbf {t})=\Vert \textbf {h}\circ \textbf {r}-\textbf {t}\Vert _2$, i.e., the modulus-part distance without the positivity constraint on relation embeddings.
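A one-function rendering of ModE under this definition (illustrative only) is:

```python
import torch

def mode_score(h, r, t):
    # ModE: only the modulus part, but entries of r may be negative
    return -torch.norm(h * r - t, p=2, dim=-1)
```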
Experiments and Analysis ::: Main Results
In this part, we show the performance of our proposed models—HAKE and ModE—against existing state-of-the-art methods, including TransE BIBREF8, DistMult BIBREF9, ComplEx BIBREF17, ConvE BIBREF18, and RotatE BIBREF7.
Table TABREF19 shows the performance of HAKE, ModE, and several previous models. Our baseline model ModE shares similar simplicity with TransE, but significantly outperforms it on all datasets. Surprisingly, ModE even outperforms more complex models such as DistMult, ConvE and ComplEx on all datasets, and beats the state-of-the-art model—RotatE—on the FB15k-237 and YAGO3-10 datasets, which demonstrates the great power of modulus information. Table TABREF19 also shows that our HAKE significantly outperforms existing state-of-the-art methods on all datasets.
The WN18RR dataset consists of two kinds of relations: symmetric relations such as $\_similar\_to$, which link entities in the category (b), and other relations such as $\_hypernym$ and $\_member\_meronym$, which link entities in the category (a). RotatE can already model entities in the category (b) very well BIBREF7. Nevertheless, HAKE gains a 0.021 higher MRR, a 2.4% higher H@1, and a 2.4% higher H@3 than RotatE. The superior performance of HAKE compared with RotatE implies that our proposed model can better model different levels of the hierarchy.
The FB15k-237 dataset has more complex relation types and fewer entities than WN18RR and YAGO3-10. Although there are relations that reflect hierarchy in FB15k-237, there are also many relations, such as “/location/location/time_zones” and “/film/film/prequel”, that do not lead to hierarchies. This characteristic of the dataset accounts for why our proposed models do not outperform the previous state-of-the-art by as large a margin as on the WN18RR and YAGO3-10 datasets. However, the results also show that our models can gain better performance so long as there exist semantic hierarchies in knowledge graphs. As almost all knowledge graphs have such hierarchy structures, our model is widely applicable.
The YAGO3-10 dataset contains entities with high relation-specific indegree BIBREF18. For example, the link prediction task $(?, hasGender, male)$ has over 1000 true answers, which makes the task challenging. Fortunately, we can regard “male” as an entity at a higher level of the hierarchy and the predicted head entities as entities at lower levels. In this way, YAGO3-10 is a dataset that clearly has the semantic hierarchy property, and we can expect that our proposed models are capable of working well on this dataset. Table TABREF19 validates our expectation. Both ModE and HAKE significantly outperform the previous state-of-the-art. Notably, HAKE gains a 0.050 higher MRR, a 6.0% higher H@1, and a 4.6% higher H@3 than RotatE.
Experiments and Analysis ::: Analysis on Relation Embeddings
In this part, we first show that HAKE can effectively model the hierarchy structures by analyzing the moduli of relation embeddings. Then, we show that the phase part of HAKE can help us to distinguish entities at the same level of the hierarchy by analyzing the phases of relation embeddings.
In Figure FIGREF20, we plot the distribution histograms of moduli of six relations. These relations are drawn from WN18RR, FB15k-237, and YAGO3-10. Specifically, the relations in Figures FIGREF20a, FIGREF20c, FIGREF20e and FIGREF20f are drawn from WN18RR. The relation in Figure FIGREF20d is drawn from FB15k-237. The relation in Figure FIGREF20b is drawn from YAGO3-10. We divide the relations in Figure FIGREF20 into three groups.
(A) Relations in Figures FIGREF20c and FIGREF20d connect entities at the same level of the semantic hierarchy;
(B) Relations in Figures FIGREF20a and FIGREF20b represent that tail entities are at higher levels of the hierarchy than head entities;
(C) Relations in Figures FIGREF20e and FIGREF20f represent that tail entities are at lower levels of the hierarchy than head entities.
As described in the model description section, we expect entities at higher levels of the hierarchy to have small moduli. The experiments validate our expectation. For both ModE and HAKE, most entries of the relations in the group (A) take values around one, which means that the head entities and tail entities have approximately the same moduli. In the group (B), most entries of the relations take values less than one, so the head entities have smaller moduli than the tail entities. The cases in the group (C) are the opposite of those in the group (B). These results show that our model can capture the semantic hierarchies in knowledge graphs. Moreover, compared with ModE, the moduli of HAKE's relation embeddings have lower variances, which shows that HAKE can model hierarchies more clearly.
As mentioned above, relations in the group (A) reflect the same semantic hierarchy and are expected to have moduli of about one. Obviously, it is hard to distinguish entities linked by these relations using only the modulus part. In Figure FIGREF22, we plot the phases of the relations in the group (A). The results show that the entities at the same level of the hierarchy can be distinguished by their phases, as many of the phases take values around $\pi $.
Experiments and Analysis ::: Analysis on Entity Embeddings
In this part, to further show that HAKE can capture the semantic hierarchies between entities, we visualize the embeddings of several entity pairs.
We plot the entity embeddings of two models: the previous state-of-the-art RotatE and our proposed HAKE. RotatE regards each entity as a group of complex numbers. As a complex number can be seen as a point on a 2D plane, we can plot the entity embeddings on a 2D plane. As for HAKE, we have mentioned that it maps entities into the polar coordinate system. Therefore, we can also plot the entity embeddings generated by HAKE on a 2D plane based on their polar coordinates. For a fair comparison, we set $k=500$. That is, each plot contains 500 points, and the actual dimension of entity embeddings is 1000. Note that we use the logarithmic scale to better display the differences between entity embeddings. As all the moduli have values less than one, after applying the logarithm operation, the larger radii in the figures actually represent smaller moduli.
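A figure of this kind can be reproduced with a few lines of matplotlib; the sketch below assumes you already have the modulus and phase vectors of one entity and only illustrates the polar, log-radius convention described above (it is not the plotting code used for the paper).

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_entity(moduli, phases, label):
    # one point per embedding dimension; radius = -log(modulus),
    # so a larger radius corresponds to a smaller modulus
    ax = plt.subplot(projection="polar")
    ax.scatter(phases, -np.log(moduli), s=4, label=label)
    ax.legend()

moduli = np.random.uniform(0.05, 1.0, size=500)    # all moduli below one
phases = np.random.uniform(0.0, 2 * np.pi, size=500)
plot_entity(moduli, phases, "toy entity")
plt.show()
```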
Figure FIGREF29 shows the visualization results of three triples from the WN18RR dataset. Compared with the tail entities, the head entities in Figures FIGREF29a, FIGREF29b, and FIGREF29c are at lower levels, similar levels, higher levels in the semantic hierarchy, respectively. We can see that there exist clear concentric circles in the visualization results of HAKE, which demonstrates that HAKE can effectively model the semantic hierarchies. However, in RotatE, the entity embeddings in all three subfigures are mixed, making it hard to distinguish entities at different levels in the hierarchy.
Experiments and Analysis ::: Ablation Studies
In this part, we conduct ablation studies on the modulus part and the phase part of HAKE, as well as the mixture bias term. Table TABREF26 shows the results on three benchmark datasets.
We can see that the bias can improve the performance of HAKE on nearly all metrics. Specifically, the bias improves the H@1 score by $4.7\%$ on the YAGO3-10 dataset, which illustrates the effectiveness of the bias.
We also observe that the modulus part of HAKE alone does not perform well on any of the datasets, due to its inability to distinguish the entities at the same level of the hierarchy. When only using the phase part, HAKE degenerates to the pRotatE model BIBREF7. It performs better than the modulus part, because it can well model entities at the same level of the hierarchy. However, our full HAKE model significantly outperforms both the modulus part and the phase part on all datasets, which demonstrates the importance of combining the two parts for modeling semantic hierarchies in knowledge graphs.
Experiments and Analysis ::: Comparison with Other Related Work
We compare our models with the TKRL models BIBREF12, which also aim to model the hierarchy structures. For the difference between HAKE and TKRL, please refer to the Related Work section. Table TABREF27 shows the H@10 scores of HAKE and TKRLs on the FB15k dataset. The best performance of TKRL is 0.734, obtained by the WHE+STC version, while the H@10 score of our HAKE model is 0.884. The results show that HAKE significantly outperforms TKRL, even though it does not require additional information.
Conclusion
To model the semantic hierarchies in knowledge graphs, we propose a novel hierarchy-aware knowledge graph embedding model—HAKE—which maps entities into the polar coordinate system. Experiments show that our proposed HAKE significantly outperforms several existing state-of-the-art methods on benchmark datasets for the link prediction task. A further investigation shows that HAKE is capable of modeling entities both at different levels and at the same level of the semantic hierarchy.
Appendix
In this appendix, we will provide analysis on relation patterns, negative entity embeddings, and moduli of entity embeddings. Then, we will give more visualization results on semantic hierarchies.
A. Analysis on Relation Patterns
In this section, we prove that our HAKE model can infer the (anti)symmetry, inversion and composition relation patterns. Detailed propositions and their proofs are as follows.
Proposition 1 HAKE can infer the (anti)symmetry pattern.
If $r(x, y)$ and $r(y, x)$ hold, we have
Then we have
Otherwise, if $r(x, y)$ and $\lnot r(y, x)$ hold, we have
Proposition 2 HAKE can infer the inversion pattern.
If $r_1(x, y)$ and $r_2(y, x)$ hold, we have
Then, we have
Proposition 3 HAKE can infer the composition pattern.
If $r_1(x, z)$, $r_2(x, y)$ and $r_3(y, z)$ hold, we have
Then we have
B. Analysis on Negative Entity Embeddings
We denote the linked entity pairs as the set of entity pairs linked by some relation, and denote the unlinked entity pairs as the set of entity pairs that do not appear together in any triple of the train/valid/test datasets. It is worth noting that the unlinked pairs may contain valid triples, as the knowledge graph is incomplete. For both the linked and the unlinked entity pairs, we count how many embedding entries of the two entities have different signs. Figure FIGREF34 shows the result.
For the linked entity pairs, as expected, most of the entries have the same sign. Due to the large number of unlinked entity pairs, we randomly sample a part of them for plotting. For the unlinked entity pairs, around half of the entries have different signs, which is consistent with the random initialization. The results support our hypothesis that allowing entity embeddings to have negative signs helps our model distinguish positive and negative triples.
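The statistic described here can be gathered with a few lines of code; the sketch below assumes the modulus-part embeddings of both entities in each pair are available and simply reports, per pair, the fraction of dimensions whose signs disagree.

```python
import torch

def fraction_of_differing_signs(e1_m, e2_m):
    # e1_m, e2_m: modulus-part entity embeddings, shape (num_pairs, k)
    differ = torch.sign(e1_m) != torch.sign(e2_m)
    return differ.float().mean(dim=-1)        # per-pair fraction of differing signs

random_pairs = fraction_of_differing_signs(torch.randn(1000, 500), torch.randn(1000, 500))
print(random_pairs.mean())                    # ~0.5 for randomly initialized, unlinked pairs
```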
C. Analysis on Moduli of Entity Embeddings
Figure FIGREF37 shows the moduli of entity embeddings. We can observe that RotatE encourages the moduli of embeddings to be the same, as the relations are modeled as rotations in a complex space. Compared with RotatE, the moduli of entity embeddings in HAKE are more dispersed, giving it more potential to model the semantic hierarchies.
D. More Results on Semantic Hierarchies
In this part, we visualize more triples from WN18RR. We plot the head and tail entities on 2D planes using the same method as that in the main text. The visualization results are shown in Figure FIGREF41, where the subcaptions give the corresponding triples. The figures show that, compared with RotatE, our HAKE model can better model entities both at different levels and at the same level of the hierarchy. | WN18RR BIBREF26, FB15k-237 BIBREF18, YAGO3-10 BIBREF27 |
cd1ad7e18d8eef8f67224ce47f3feec02718ea1a | cd1ad7e18d8eef8f67224ce47f3feec02718ea1a_0 | Q: What are state-of-the-art models for this task? | TransE, DistMult, ComplEx, ConvE, RotatE |
9c9e90ceaba33242342a5ae7568e89fe660270d5 | 9c9e90ceaba33242342a5ae7568e89fe660270d5_0 | Q: How better does HAKE model peform than state-of-the-art methods?
Text: Introduction
Knowledge graphs are usually collections of factual triples—(head entity, relation, tail entity), which represent human knowledge in a structured way. In the past few years, we have witnessed the great achievement of knowledge graphs in many areas, such as natural language processing BIBREF0, question answering BIBREF1, and recommendation systems BIBREF2.
Although commonly used knowledge graphs contain billions of triples, they still suffer from the incompleteness problem that a lot of valid triples are missing, as it is impractical to find all valid triples manually. Therefore, knowledge graph completion, also known as link prediction in knowledge graphs, has attracted much attention recently. Link prediction aims to automatically predict missing links between entities based on known links. It is a challenging task as we not only need to predict whether there is a relation between two entities, but also need to determine which relation it is.
Inspired by word embeddings BIBREF3 that can well capture semantic meaning of words, researchers turn to distributed representations of knowledge graphs (aka, knowledge graph embeddings) to deal with the link prediction problem. Knowledge graph embeddings regard entities and relations as low dimensional vectors (or matrices, tensors), which can be stored and computed efficiently. Moreover, like in the case of word embeddings, knowledge graph embeddings can preserve the semantics and inherent structures of entities and relations. Therefore, other than the link prediction task, knowledge graph embeddings can also be used in various downstream tasks, such as triple classification BIBREF4, relation inference BIBREF5, and search personalization BIBREF6.
The success of existing knowledge graph embedding models heavily relies on their ability to model connectivity patterns of the relations, such as symmetry/antisymmetry, inversion, and composition BIBREF7. For example, TransE BIBREF8, which represents relations as translations, can model the inversion and composition patterns. DistMult BIBREF9, which models the three-way interactions between head entities, relations, and tail entities, can model the symmetry pattern. RotatE BIBREF7, which represents entities as points in a complex space and relations as rotations, can model relation patterns including symmetry/antisymmetry, inversion, and composition. However, many existing models fail to model semantic hierarchies in knowledge graphs.
Semantic hierarchy is a ubiquitous property in knowledge graphs. For instance, WordNet BIBREF10 contains the triple [arbor/cassia/palm, hypernym, tree], where “tree” is at a higher level than “arbor/cassia/palm” in the hierarchy. Freebase BIBREF11 contains the triple [England, /location/location/contains, Pontefract/Lancaster], where “Pontefract/Lancaster” is at a lower level than “England” in the hierarchy. Although there exists some work that takes the hierarchy structures into account BIBREF12, BIBREF13, they usually require additional data or process to obtain the hierarchy information. Therefore, it is still challenging to find an approach that is capable of modeling the semantic hierarchy automatically and effectively.
In this paper, we propose a novel knowledge graph embedding model—namely, Hierarchy-Aware Knowledge Graph Embedding (HAKE). To model the semantic hierarchies, HAKE is expected to distinguish entities in two categories: (a) at different levels of the hierarchy; (b) at the same level of the hierarchy. Inspired by the fact that entities that have the hierarchical properties can be viewed as a tree, we can use the depth of a node (entity) to model different levels of the hierarchy. Thus, we use modulus information to model entities in the category (a), as the size of moduli can reflect the depth. Under the above settings, entities in the category (b) will have roughly the same modulus, which is hard to distinguish. Inspired by the fact that the points on the same circle can have different phases, we use phase information to model entities in the category (b). Combining the modulus and phase information, HAKE maps entities into the polar coordinate system, where the radial coordinate corresponds to the modulus information and the angular coordinate corresponds to the phase information. Experiments show that our proposed HAKE model can not only clearly distinguish the semantic hierarchies of entities, but also significantly and consistently outperform several state-of-the-art methods on the benchmark datasets.
Notations Throughout this paper, we use lower-case letters $h$, $r$, and $t$ to represent head entities, relations, and tail entities, respectively. The triplet $(h,r,t)$ denotes a fact in knowledge graphs. The corresponding boldface lower-case letters $\textbf {h}$, $\textbf {r}$ and $\textbf {t}$ denote the embeddings (vectors) of head entities, relations, and tail entities. The $i$-th entry of a vector $\textbf {h}$ is denoted as $[\textbf {h}]_i$. Let $k$ denote the embedding dimension.
Let $\circ :\mathbb {R}^n\times \mathbb {R}^n\rightarrow \mathbb {R}^n$ denote the Hadamard product between two vectors, that is, $[\textbf {a}\circ \textbf {b}]_i=[\textbf {a}]_i\,[\textbf {b}]_i$ for $\textbf {a},\textbf {b}\in \mathbb {R}^n$,
and $\Vert \cdot \Vert _1$, $\Vert \cdot \Vert _2$ denote the $\ell _1$ and $\ell _2$ norm, respectively.
Related Work
In this section, we will describe the related work and the key differences between them and our work in two aspects—the model category and the way to model hierarchy structures in knowledge graphs.
Related Work ::: Model Category
Roughly speaking, we can divide knowledge graph embedding models into three categories—translational distance models, bilinear models, and neural network based models. Table TABREF2 exhibits several popular models.
Translational distance models describe relations as translations from source entities to target entities. TransE BIBREF8 supposes that entities and relations satisfy $\textbf {h}+\textbf {r}\approx \textbf {t}$, where $\textbf {h}, \textbf {r}, \textbf {t} \in \mathbb {R}^n$, and defines the corresponding score function as $f_r(\textbf {h},\textbf {t})=-\Vert \textbf {h}+\textbf {r}-\textbf {t}\Vert _{1/2}$. However, TransE does not perform well on 1-N, N-1 and N-N relations BIBREF14. TransH BIBREF14 overcomes the many-to-many relation problem by allowing entities to have distinct representations given different relations. The score function is defined as $f_r(\textbf {h},\textbf {t})=-\Vert \textbf {h}_{\perp }+\textbf {r}-\textbf {t}_{\perp }\Vert _2$, where $\textbf {h}_{\perp }$ and $\textbf {t}_{\perp }$ are the projections of entities onto relation-specific hyperplanes. ManifoldE BIBREF15 deals with many-to-many problems by relaxing the hypothesis $\textbf {h}+\textbf {r}\approx \textbf {t}$ to $\Vert \textbf {h}+\textbf {r}-\textbf {t}\Vert _2^2\approx \theta _r^2$ for each valid triple. In this way, the candidate entities can lie on a manifold instead of exact point. The corresponding score function is defined as $f_r(\textbf {h},\textbf {t})=-(\Vert \textbf {h}+\textbf {r}-\textbf {t}\Vert _2^2-\theta _r^2)^2$. More recently, to better model symmetric and antisymmetric relations, RotatE BIBREF7 defines each relation as a rotation from source entities to target entities in a complex vector space. The score function is defined as $f_r(\textbf {h},\textbf {t})=-\Vert \textbf {h}\circ \textbf {r}-\textbf {t}\Vert _1$, where $\textbf {h},\textbf {r},\textbf {t}\in \mathbb {C}^k$ and $|[\textbf {r}]_i|=1$.
Bilinear models use product-based score functions to match latent semantics of entities and relations embodied in their vector space representations. RESCAL BIBREF16 represents each relation as a full rank matrix, and defines the score function as $f_r(\textbf {h},\textbf {t})=\textbf {h}^\top \textbf {M}_r \textbf {t}$, which can also be seen as a bilinear function. As full rank matrices are prone to overfitting, recent works turn to make additional assumptions on $\textbf {M}_r$. For example, DistMult BIBREF9 assumes $\textbf {M}_r$ to be a diagonal matrix, and ANALOGY BIBREF19 supposes that $\textbf {M}_r$ is normal. However, these simplified models are usually less expressive and not powerful enough for general knowledge graphs. Differently, ComplEx BIBREF17 extends DistMult by introducing complex-valued embeddings to better model asymmetric and inverse relations. HolE BIBREF20 combines the expressive power of RESCAL with the efficiency and simplicity of DistMult by using the circular correlation operation.
Neural network based models have received greater attention in recent years. For example, MLP BIBREF21 and NTN BIBREF22 use a fully connected neural network to determine the scores of given triples. ConvE BIBREF18 and ConvKB BIBREF23 employ convolutional neural networks to define score functions. Recently, graph convolutional networks are also introduced, as knowledge graphs obviously have graph structures BIBREF24.
Our proposed model HAKE belongs to the translational distance models. More specifically, HAKE shares similarities with RotatE BIBREF7, in which the authors claim that they use both modulus and phase information. However, there exist two major differences between RotatE and HAKE. Detailed differences are as follows.
The aims are different. RotatE aims to model the relation patterns including symmetry/antisymmetry, inversion, and composition. HAKE aims to model the semantic hierarchy, while it can also model all the relation patterns mentioned above.
The ways to use modulus information are different. RotatE models relations as rotations in the complex space, which encourages two linked entities to have the same modulus, no matter what the relation is. The different moduli in RotatE come from the inaccuracy in training. Instead, HAKE explicitly models the modulus information, which significantly outperforms RotatE in distinguishing entities at different levels of the hierarchy.
Related Work ::: The Ways to Model Hierarchy Structures
Another related problem is how to model hierarchy structures in knowledge graphs. Some recent work considers the problem in different ways. BIBREF25 embed entities and categories jointly into a semantic space and designs models for the concept categorization and dataless hierarchical classification tasks. BIBREF13 use clustering algorithms to model the hierarchical relation structures. BIBREF12 proposed TKRL, which embeds the type information into knowledge graph embeddings. That is, TKRL requires additional hierarchical type information for entities.
Different from the previous work, our work
considers the link prediction task, which is a more common task for knowledge graph embeddings;
can automatically learn the semantic hierarchy in knowledge graphs without using clustering algorithms;
does not require any additional information other than the triples in knowledge graphs.
The Proposed HAKE
In this section, we introduce our proposed model HAKE. We first introduce two categories of entities that reflect the semantic hierarchies in knowledge graphs. Afterwards, we introduce our proposed HAKE that can model entities in both of the categories.
The Proposed HAKE ::: Two Categories of Entities
To model the semantic hierarchies of knowledge graphs, a knowledge graph embedding model must be capable of distinguishing entities in the following two categories.
Entities at different levels of the hierarchy. For example, “mammal” and “dog”, “run” and ”move”.
Entities at the same level of the hierarchy. For example, “rose” and “peony”, “truck” and ”lorry”.
The Proposed HAKE ::: Hierarchy-Aware Knowledge Graph Embedding
To model both of the above categories, we propose a hierarchy-aware knowledge graph embedding model—HAKE. HAKE consists of two parts—the modulus part and the phase part—which aim to model entities in the two different categories, respectively. Figure FIGREF13 gives an illustration of the proposed model.
To distinguish embeddings in the different parts, we use $\textbf {e}_m$ ($\textbf {e}$ can be $\textbf {h}$ or $\textbf {t}$) and $\textbf {r}_m$ to denote the entity embedding and relation embedding in the modulus part, and use $\textbf {e}_p$ ($\textbf {e}$ can be $\textbf {h}$ or $\textbf {t}$) and $\textbf {r}_p$ to denote the entity embedding and relation embedding in the phase part.
The modulus part aims to model the entities at different levels of the hierarchy. Inspired by the fact that entities with hierarchical properties can be viewed as a tree, we can use the depth of a node (entity) to model different levels of the hierarchy. Therefore, we use modulus information to model entities in the category (a), as moduli can reflect the depth in a tree. Specifically, we regard each entry of $\textbf {h}_m$ and $\textbf {t}_m$, that is, $[\textbf {h}_m]_i$ and $[\textbf {t}_m]_i$, as a modulus, and regard each entry of $\textbf {r}_m$, that is, $[\textbf {r}_m]_i$, as a scaling transformation between two moduli. We can formulate the modulus part as $\textbf {h}_m\circ \textbf {r}_m=\textbf {t}_m$, where $\textbf {h}_m,\textbf {t}_m\in \mathbb {R}^k$ and $\textbf {r}_m\in \mathbb {R}^k_+$.
The corresponding distance function is $d_{r,m}(\textbf {h}_m,\textbf {t}_m)=\Vert \textbf {h}_m\circ \textbf {r}_m-\textbf {t}_m\Vert _2$.
Note that we allow the entries of entity embeddings to be negative but restrict the entries of relation embeddings to be positive. This is because the signs of entity embeddings can help us to predict whether there exists a relation between two entities. For example, if there exists a relation $r$ between $h$ and $t_1$, and no relation between $h$ and $t_2$, then $(h, r, t_1)$ is a positive sample and $(h, r, t_2)$ is a negative sample. Our goal is to minimize $d_r(\textbf {h}_m, \textbf {t}_{1,m})$ and maximize $d_r(\textbf {h}_m, \textbf {t}_{2,m})$, so as to make a clear distinction between positive and negative samples. For the positive sample, $[\textbf {h}_m]_i$ and $[\textbf {t}_{1,m}]_i$ tend to share the same sign, as $[\textbf {r}_m]_i>0$. For the negative sample, the signs of $[\textbf {h}_m]_i$ and $[\textbf {t}_{2,m}]_i$ can be different if we initialize their signs randomly. In this way, $d_r(\textbf {h}_m, \textbf {t}_{2,m})$ is more likely to be larger than $d_r(\textbf {h}_m, \textbf {t}_{1,m})$, which is exactly what we desire. We will validate this argument by experiments in Section 4 of the supplementary material.
Further, we can expect the entities at higher levels of the hierarchy to have smaller moduli, as these entities are closer to the root of the tree.
If we use only the modulus part to embed knowledge graphs, then the entities in the category (b) will have the same modulus. Moreover, suppose that $r$ is a relation that reflects the same semantic hierarchy, then $[\textbf {r}]_i$ will tend to be one, as $h\circ r\circ r=h$ holds for all $h$. Hence, embeddings of the entities in the category (b) tend to be the same, which makes it hard to distinguish these entities. Therefore, a new module is required to model the entities in the category (b).
The phase part aims to model the entities at the same level of the semantic hierarchy. Inspired by the fact that points on the same circle (that is, with the same modulus) can have different phases, we use phase information to distinguish entities in the category (b). Specifically, we regard each entry of $\textbf {h}_p$ and $\textbf {t}_p$, that is, $[\textbf {h}_p]_i$ and $[\textbf {t}_p]_i$, as a phase, and regard each entry of $\textbf {r}_p$, that is, $[\textbf {r}_p]_i$, as a phase transformation. We can formulate the phase part as $(\textbf {h}_p+\textbf {r}_p)\bmod 2\pi =\textbf {t}_p$, where $\textbf {h}_p,\textbf {t}_p,\textbf {r}_p\in [0,2\pi )^k$.
The corresponding distance function is $d_{r,p}(\textbf {h}_p,\textbf {t}_p)=\Vert \sin ((\textbf {h}_p+\textbf {r}_p-\textbf {t}_p)/2)\Vert _1$,
where $\sin (\cdot )$ is an operation that applies the sine function to each element of the input. Note that we use a sine function to measure the distance between phases instead of using $\Vert \textbf {h}_p+\textbf {r}_p-\textbf {t}_p\Vert _1$, as phases have periodic characteristic. This distance function shares the same formulation with that of pRotatE BIBREF7.
Combining the modulus part and the phase part, HAKE maps entities into the polar coordinate system, where the radial coordinate and the angular coordinates correspond to the modulus part and the phase part, respectively. That is, HAKE maps an entity $h$ to $[\textbf {h}_m;\textbf {h}_p]$, where $\textbf {h}_m$ and $\textbf {h}_p$ are generated by the modulus part and the phase part, respectively, and $[\,\cdot \,; \,\cdot \,]$ denotes the concatenation of two vectors. Obviously, $([\textbf {h}_m]_i,[\textbf {h}_p]_i)$ is a 2D point in the polar coordinate system. Specifically, we formulate HAKE as follows:
The distance function of HAKE is $d_r(\textbf {h},\textbf {t})=d_{r,m}(\textbf {h}_m,\textbf {t}_m)+\lambda d_{r,p}(\textbf {h}_p,\textbf {t}_p)$,
where $\lambda \in \mathbb {R}$ is a parameter learned by the model. The corresponding score function is $f_r(\textbf {h},\textbf {t})=-d_r(\textbf {h},\textbf {t})=-d_{r,m}(\textbf {h}_m,\textbf {t}_m)-\lambda d_{r,p}(\textbf {h}_p,\textbf {t}_p)$.
When two entities have the same moduli, then the modulus part $d_{r,m}(\textbf {h}_m,\textbf {t}_m)=0$. However, the phase part $d_{r,p}(\textbf {h}_p,\textbf {t}_p)$ can be very different. By combining the modulus part and the phase part, HAKE can model the entities in both the category (a) and the category (b). Therefore, HAKE can model semantic hierarchies of knowledge graphs.
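For illustration, the sketch below implements the combined distance and score described above in PyTorch, using the pRotatE-style sine distance for the phase part. It is a minimal sketch under stated assumptions: variable names, the fixed $\lambda$, and the random embeddings are not from the released implementation, which additionally uses the mixture bias and extra coefficients discussed below.

```python
import torch

def hake_score(h_m, h_p, t_m, t_p, r_m, r_p, lam=1.0):
    """Score of a triple under the HAKE-style distance described above.

    h_m, t_m: modulus-part entity embeddings, shape (..., k)
    h_p, t_p: phase-part entity embeddings in [0, 2*pi), shape (..., k)
    r_m: modulus-part relation embedding with positive entries, shape (..., k)
    r_p: phase-part relation embedding, shape (..., k)
    lam: weight on the phase distance (learned in the full model)
    """
    d_mod = torch.norm(h_m * r_m - t_m, p=2, dim=-1)
    d_phase = torch.norm(torch.sin((h_p + r_p - t_p) / 2), p=1, dim=-1)
    return -(d_mod + lam * d_phase)            # higher score = more plausible

# Example with random embeddings (k = 4).
k = 4
h_m, t_m = torch.randn(k), torch.randn(k)
h_p, t_p, r_p = torch.rand(k) * 6.28, torch.rand(k) * 6.28, torch.rand(k) * 6.28
r_m = torch.rand(k)                            # positive scaling of moduli
print(hake_score(h_m, h_p, t_m, t_p, r_m, r_p))
```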
When evaluating the models, we find that adding a mixture bias to $d_{r,m}(\textbf {h},\textbf {t})$ can help to improve the performance of HAKE. The modified $d_{r,m}(\textbf {h},\textbf {t})$ is given by:
where $0<\textbf {r}^{\prime }_m<1$ is a vector that has the same dimension as $\textbf {r}_m$. Indeed, the above distance function is equivalent to
where $/$ denotes the element-wise division operation. If we let $\textbf {r}_m\leftarrow (1-\textbf {r}_m^{\prime })/(\textbf {r}_m+\textbf {r}_m^{\prime })$, then the modified distance function is exactly the same as the original one when comparing the distances of different entity pairs. For notation convenience, we still use $d_{r,m}(\textbf {h},\textbf {t})=\Vert \textbf {h}_m\circ \textbf {r}_m-\textbf {t}_m\Vert _2$ to represent the modulus part. We will conduct ablation studies on the bias in the experiment section.
The Proposed HAKE ::: Loss Function
To train the model, we use the negative sampling loss function with self-adversarial training BIBREF7: $L=-\log \sigma (\gamma -d_r(\textbf {h},\textbf {t}))-\sum _{i=1}^{n}p(h^{\prime }_i,r,t^{\prime }_i)\log \sigma (d_r(\textbf {h}^{\prime }_i,\textbf {t}^{\prime }_i)-\gamma ),$
where $\gamma $ is a fixed margin, $\sigma $ is the sigmoid function, and $(h^{\prime }_i,r,t^{\prime }_i)$ is the $i$th negative triple. Moreover, $p(h^{\prime }_j,r,t^{\prime }_j\,|\,\lbrace (h_i,r_i,t_i)\rbrace )=\exp (\alpha f_r(\textbf {h}^{\prime }_j,\textbf {t}^{\prime }_j))/\sum _i\exp (\alpha f_r(\textbf {h}^{\prime }_i,\textbf {t}^{\prime }_i))$
is the probability distribution of sampling negative triples, where $\alpha $ is the temperature of sampling.
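A minimal sketch of this self-adversarial negative sampling loss is given below, assuming the score is the negative distance so that the sampling weights come from a softmax over $-d_r$; treating the weights as constants follows the RotatE-style formulation, and all names and values are illustrative.

```python
import torch
import torch.nn.functional as F

def self_adversarial_loss(pos_dist, neg_dist, gamma=12.0, alpha=1.0):
    """Negative sampling loss with self-adversarial weighting.

    pos_dist: distances d_r(h, t) of positive triples, shape (batch,)
    neg_dist: distances of sampled negative triples, shape (batch, n_neg)
    gamma: fixed margin; alpha: temperature of the sampling distribution
    """
    pos_loss = -F.logsigmoid(gamma - pos_dist)
    # Weights p(h'_i, r, t'_i): softmax over negative scores f_r = -d_r,
    # treated as constants (no gradient flows through the weights).
    weights = torch.softmax(-alpha * neg_dist, dim=-1).detach()
    neg_loss = -(weights * F.logsigmoid(neg_dist - gamma)).sum(dim=-1)
    return (pos_loss + neg_loss).mean()

pos = torch.rand(8) * 5
neg = torch.rand(8, 16) * 5
print(self_adversarial_loss(pos, neg))
```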
Experiments and Analysis
This section is organized as follows. First, we introduce the experimental settings in detail. Then, we show the effectiveness of our proposed model on three benchmark datasets. Finally, we analyze the embeddings generated by HAKE, and show the results of ablation studies. The code of HAKE is available on GitHub at https://github.com/MIRALab-USTC/KGE-HAKE.
Experiments and Analysis ::: Experimental Settings
We evaluate our proposed models on three commonly used knowledge graph datasets—WN18RR BIBREF26, FB15k-237 BIBREF18, and YAGO3-10 BIBREF27. Details of these datasets are summarized in Table TABREF18.
WN18RR, FB15k-237, and YAGO3-10 are subsets of WN18 BIBREF8, FB15k BIBREF8, and YAGO3 BIBREF27, respectively. As pointed out by BIBREF26 and BIBREF18, WN18 and FB15k suffer from the test set leakage problem. One can attain the state-of-the-art results even using a simple rule based model. Therefore, we use WN18RR and FB15k-237 as the benchmark datasets.
Evaluation Protocol Following BIBREF8, for each triple $(h,r,t)$ in the test dataset, we replace either the head entity $h$ or the tail entity $t$ with each candidate entity to create a set of candidate triples. We then rank the candidate triples in descending order by their scores. It is worth noting that we use the “Filtered” setting as in BIBREF8, which does not take any existing valid triples into account when ranking. We choose Mean Reciprocal Rank (MRR) and Hits at N (H@N) as the evaluation metrics. Higher MRR or H@N indicate better performance.
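As an illustration of the “Filtered” ranking just described, the sketch below ranks one corrupted triple and reports its reciprocal rank and Hits@N. The score array and entity indices are hypothetical, and ties are ignored for simplicity.

```python
import numpy as np

def filtered_rank(scores, target, other_true):
    """Rank one test triple under the 'Filtered' setting.

    scores: score of every candidate entity for the corrupted slot, shape (num_entities,)
    target: index of the true entity being ranked
    other_true: indices of other entities that also form valid triples
                (from train/valid/test); they are excluded from the ranking
    """
    mask = np.ones_like(scores, dtype=bool)
    mask[list(other_true)] = False
    mask[target] = True                               # keep the entity being ranked
    rank = 1 + int(np.sum(scores[mask] > scores[target]))
    return 1.0 / rank, {n: float(rank <= n) for n in (1, 3, 10)}

# Hypothetical usage: average the reciprocal ranks (MRR) and H@N over all test triples.
rr, hits = filtered_rank(np.random.rand(100), target=7, other_true={3, 42})
print(rr, hits)
```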
Training Protocol We use Adam BIBREF28 as the optimizer, and use grid search to find the best hyperparameters based on the performance on the validation datasets. To make the model easier to train, we add an additional coefficient to the distance function, i.e., $d_{r}(\textbf {h},\textbf {t})=\lambda _1d_{r,m}(\textbf {h}_m,\textbf {t}_m)+\lambda _2 d_{r,p}(\textbf {h}_p,\textbf {t}_p)$, where $\lambda _1,\lambda _2\in \mathbb {R}$.
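A minimal sketch of such a training setup—Adam plus a grid search over hyperparameters, with the weighted distance $d_{r}(\textbf {h},\textbf {t})=\lambda _1d_{r,m}+\lambda _2 d_{r,p}$—is shown below. The grid values and placeholder parameters are assumptions for illustration, not the settings used in the paper.

```python
import itertools
import torch

# Hypothetical hyperparameter grid, to be selected on the validation split.
grid = {
    "lr": [1e-3, 1e-4],
    "gamma": [6.0, 12.0, 24.0],
    "lambda1": [0.5, 1.0],
    "lambda2": [0.5, 1.0],
}

def combined_distance(d_mod, d_phase, lambda1, lambda2):
    # d_r(h, t) = lambda1 * d_{r,m}(h_m, t_m) + lambda2 * d_{r,p}(h_p, t_p)
    return lambda1 * d_mod + lambda2 * d_phase

for lr, gamma, lambda1, lambda2 in itertools.product(*grid.values()):
    embeddings = torch.nn.Parameter(torch.randn(100, 8))   # placeholder parameters
    optimizer = torch.optim.Adam([embeddings], lr=lr)
    # ... train with the self-adversarial loss, evaluate MRR on the validation
    # split, and keep the configuration with the highest validation MRR ...
    print("config:", lr, gamma, lambda1, lambda2)
```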
Baseline Model One may argue that the phase part is unnecessary, as we can distinguish entities in the category (b) by allowing $[\textbf {r}]_i$ to be negative. We propose a model—ModE—that uses only the modulus part but allows $[\textbf {r}]_i<0$. Specifically, the distance function of ModE is $d_r(\textbf {h},\textbf {t})=\Vert \textbf {h}\circ \textbf {r}-\textbf {t}\Vert _2$, where $\textbf {h},\textbf {r},\textbf {t}\in \mathbb {R}^k$.
Experiments and Analysis ::: Main Results
In this part, we show the performance of our proposed models—HAKE and ModE—against existing state-of-the-art methods, including TransE BIBREF8, DistMult BIBREF9, ComplEx BIBREF17, ConvE BIBREF18, and RotatE BIBREF7.
Table TABREF19 shows the performance of HAKE, ModE, and several previous models. Our baseline model ModE shares similar simplicity with TransE, but significantly outperforms it on all datasets. Surprisingly, ModE even outperforms more complex models such as DistMult, ConvE and ComplEx on all datasets, and beats the state-of-the-art model—RotatE—on the FB15k-237 and YAGO3-10 datasets, which demonstrates the great power of modulus information. Table TABREF19 also shows that our HAKE significantly outperforms existing state-of-the-art methods on all datasets.
WN18RR dataset consists of two kinds of relations: the symmetric relations such as $\_similar\_to$, which link entities in the category (b); other relations such as $\_hypernym$ and $\_member\_meronym$, which link entities in the category (a). Actually, RotatE can model entities in the category (b) very well BIBREF7. However, HAKE gains a 0.021 higher MRR, a 2.4% higher H@1, and a 2.4% higher H@3 against RotatE, respectively. The superior performance of HAKE compared with RotatE implies that our proposed model can better model different levels in the hierarchy.
The FB15k-237 dataset has more complex relation types and fewer entities, compared with WN18RR and YAGO3-10. Although there are relations that reflect hierarchy in FB15k-237, there are also lots of relations, such as “/location/location/time_zones” and “/film/film/prequel”, that do not lead to hierarchy. This characteristic of the dataset accounts for why our proposed models do not outperform the previous state-of-the-art by as large a margin as on the WN18RR and YAGO3-10 datasets. However, the results also show that our models can gain better performance so long as there exist semantic hierarchies in knowledge graphs. As almost all knowledge graphs have such hierarchy structures, our model is widely applicable.
The YAGO3-10 dataset contains entities with high relation-specific indegree BIBREF18. For example, the link prediction task $(?, hasGender, male)$ has over 1000 true answers, which makes the task challenging. Fortunately, we can regard “male” as an entity at a higher level of the hierarchy and the predicted head entities as entities at a lower level. In this way, YAGO3-10 is a dataset that clearly has the semantic hierarchy property, and we can expect that our proposed models are capable of working well on this dataset. Table TABREF19 validates our expectation. Both ModE and HAKE significantly outperform the previous state-of-the-art. Notably, HAKE gains a 0.050 higher MRR, 6.0% higher H@1 and 4.6% higher H@3 than RotatE, respectively.
Experiments and Analysis ::: Analysis on Relation Embeddings
In this part, we first show that HAKE can effectively model the hierarchy structures by analyzing the moduli of relation embeddings. Then, we show that the phase part of HAKE can help us to distinguish entities at the same level of the hierarchy by analyzing the phases of relation embeddings.
In Figure FIGREF20, we plot the distribution histograms of moduli of six relations. These relations are drawn from WN18RR, FB15k-237, and YAGO3-10. Specifically, the relations in Figures FIGREF20a, FIGREF20c, FIGREF20e and FIGREF20f are drawn from WN18RR. The relation in Figure FIGREF20d is drawn from FB15k-237. The relation in Figure FIGREF20b is drawn from YAGO3-10. We divide the relations in Figure FIGREF20 into three groups.
Relations in Figures FIGREF20c and FIGREF20d connect the entities at the same level of the semantic hierarchy;
Relations in Figures FIGREF20a and FIGREF20b represent that tail entities are at higher levels than head entities of the hierarchy;
Relations in Figures FIGREF20e and FIGREF20f represent that tail entities are at lower levels than head entities of the hierarchy.
As described in the model description section, we expect entities at higher levels of the hierarchy to have small moduli. The experiments validate our expectation. For both ModE and HAKE, most entries of the relations in the group (A) take values around one, so the head entities and tail entities have approximately the same moduli. In the group (B), most entries of the relations take values less than one, so the head entities have smaller moduli than the tail entities. The cases in the group (C) are contrary to that in the group (B). These results show that our model can capture the semantic hierarchies in knowledge graphs. Moreover, compared with ModE, the relation embeddings' moduli of HAKE have lower variances, which shows that HAKE can model hierarchies more clearly.
As mentioned above, relations in the group (A) reflect the same semantic hierarchy, and are expected to have the moduli of about one. Obviously, it is hard to distinguish entities linked by these relations only using the modulus part. In Figure FIGREF22, we plot the phases of the relations in the group (A). The results show that the entities at the same level of the hierarchy can be distinguished by their phases, as many phases have the values of $\pi $.
Experiments and Analysis ::: Analysis on Entity Embeddings
In this part, to further show that HAKE can capture the semantic hierarchies between entities, we visualize the embeddings of several entity pairs.
We plot the entity embeddings of two models: the previous state-of-the-art RotatE and our proposed HAKE. RotatE regards each entity as a group of complex numbers. As a complex number can be seen as a point on a 2D plane, we can plot the entity embeddings on a 2D plane. As for HAKE, we have mentioned that it maps entities into the polar coordinate system. Therefore, we can also plot the entity embeddings generated by HAKE on a 2D plane based on their polar coordinates. For a fair comparison, we set $k=500$. That is, each plot contains 500 points, and the actual dimension of entity embeddings is 1000. Note that we use the logarithmic scale to better display the differences between entity embeddings. As all the moduli have values less than one, after applying the logarithm operation, the larger radii in the figures will actually represent smaller modulus.
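The following sketch shows how such a polar plot with a logarithmic radial scale could be produced with matplotlib. The embeddings here are randomly generated stand-ins for a real head/tail pair, and we plot $-\log$ of the modulus so that larger radii correspond to smaller moduli, matching the description above.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
k = 500  # points per entity, as in the visualization above

# Hypothetical head/tail embeddings of one triple: moduli in (0, 1), phases in [0, 2*pi).
head_m, head_p = rng.uniform(0.05, 0.3, k), rng.uniform(0, 2 * np.pi, k)
tail_m, tail_p = rng.uniform(0.4, 0.9, k), rng.uniform(0, 2 * np.pi, k)

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
# Logarithmic radial scale: plotting -log(modulus) makes entities with
# smaller moduli appear at larger radii.
ax.scatter(head_p, -np.log(head_m), s=4, label="head entity")
ax.scatter(tail_p, -np.log(tail_m), s=4, label="tail entity")
ax.legend(loc="upper right")
plt.show()
```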
Figure FIGREF29 shows the visualization results of three triples from the WN18RR dataset. Compared with the tail entities, the head entities in Figures FIGREF29a, FIGREF29b, and FIGREF29c are at lower levels, similar levels, higher levels in the semantic hierarchy, respectively. We can see that there exist clear concentric circles in the visualization results of HAKE, which demonstrates that HAKE can effectively model the semantic hierarchies. However, in RotatE, the entity embeddings in all three subfigures are mixed, making it hard to distinguish entities at different levels in the hierarchy.
Experiments and Analysis ::: Ablation Studies
In this part, we conduct ablation studies on the modulus part and the phase part of HAKE, as well as the mixture bias item. Table TABREF26 shows the results on three benchmark datasets.
We can see that the bias can improve the performance of HAKE on nearly all metrics. Specifically, the bias improves the H@1 score by $4.7\%$ on the YAGO3-10 dataset, which illustrates the effectiveness of the bias.
We also observe that the modulus part of HAKE does not perform well on all datasets, due to its inability to distinguish the entities at the same level of the hierarchy. When only using the phase part, HAKE degenerates to the pRotatE model BIBREF7. It performs better than the modulus part, because it can model entities at the same level of the hierarchy well. However, our HAKE model significantly outperforms the modulus part and the phase part on all datasets, which demonstrates the importance of combining the two parts for modeling semantic hierarchies in knowledge graphs.
Experiments and Analysis ::: Comparison with Other Related Work
We compare our models with TKRL models BIBREF12, which also aim to model the hierarchy structures. For the difference between HAKE and TKRL, please refer to the Related Work section. Table TABREF27 shows the H@10 scores of HAKE and TKRLs on FB15k dataset. The best performance of TKRL is .734 obtained by the WHE+STC version, while the H@10 score of our HAKE model is .884. The results show that HAKE significantly outperforms TKRL, though it does not require additional information.
Conclusion
To model the semantic hierarchies in knowledge graphs, we propose a novel hierarchy-aware knowledge graph embedding model—HAKE—which maps entities into the polar coordinate system. Experiments show that our proposed HAKE significantly outperforms several existing state-of-the-art methods on benchmark datasets for the link prediction task. A further investigation shows that HAKE is capable of modeling entities at both different levels and the same levels in the semantic hierarchies.
Appendix
In this appendix, we will provide analysis on relation patterns, negative entity embeddings, and moduli of entity embeddings. Then, we will give more visualization results on semantic hierarchies.
A. Analysis on Relation Patterns
In this section, we prove that our HAKE model can infer the (anti)symmetry, inversion and composition relation patterns. Detailed propositions and their proofs are as follows.
Proposition 1 HAKE can infer the (anti)symmetry pattern.
If $r(x, y)$ and $r(y, x)$ hold, we have
Then we have
Otherwise, if $r(x, y)$ and $\lnot r(y, x)$ hold, we have
Proposition 2 HAKE can infer the inversion pattern.
If $r_1(x, y)$ and $r_2(y, x)$ hold, we have
Then, we have
Proposition 3 HAKE can infer the composition pattern.
If $r_1(x, z)$, $r_2(x, y)$ and $r_3(y, z)$ hold, we have
Then we have
B. Analysis on Negative Entity Embeddings
We denote the linked entity pairs as the set of entity pairs linked by some relation, and denote the unlinked entity pairs as the set of entity pairs that do not appear together in any triple of the train/valid/test datasets. It is worth noting that the unlinked pairs may contain valid triples, as the knowledge graph is incomplete. For both the linked and the unlinked entity pairs, we count the embedding entries of the two entities that have different signs. Figure FIGREF34 shows the result.
For the linked entity pairs, as we expected, most of the entries have the same sign. Due to the large number of unlinked entity pairs, we randomly sample a part of them for plotting. For the unlinked entity pairs, around half of the entries have different signs, which is consistent with the random initialization. The results support our hypothesis that the negative signs of entity embeddings can help our model to distinguish positive and negative triples.
C. Analysis on Moduli of Entity Embeddings
Figure FIGREF37 shows the moduli of entity embeddings. We can observe that RotatE encourages the moduli of embeddings to be the same, as the relations are modeled as rotations in a complex space. Compared with RotatE, the moduli of entity embeddings in HAKE are more dispersed, giving HAKE more potential to model the semantic hierarchies.
D. More Results on Semantic Hierarchies
In this part, we visualize more triples from WN18RR. We plot the head and tail entities on 2D planes using the same method as that in the main text. The visualization results are in Figure FIGREF41, where the subcaptions demonstrate the corresponding triples. The figures show that, compared with RotatE, our HAKE model can better model the entities both in different hierarchies and in the same hierarchy. | 0.021 higher MRR, a 2.4% higher H@1, and a 2.4% higher H@3 against RotatE, respectively, doesn't outperform the previous state-of-the-art as much as that of WN18RR and YAGO3-10, HAKE gains a 0.050 higher MRR, 6.0% higher H@1 and 4.6% higher H@3 than RotatE, respectively |
2a058f8f6bd6f8e80e8452e1dba9f8db5e3c7de8 | 2a058f8f6bd6f8e80e8452e1dba9f8db5e3c7de8_0 | Q: How are entities mapped onto polar coordinate system?
Text: Introduction
Knowledge graphs are usually collections of factual triples—(head entity, relation, tail entity), which represent human knowledge in a structured way. In the past few years, we have witnessed the great achievement of knowledge graphs in many areas, such as natural language processing BIBREF0, question answering BIBREF1, and recommendation systems BIBREF2.
Although commonly used knowledge graphs contain billions of triples, they still suffer from the incompleteness problem that a lot of valid triples are missing, as it is impractical to find all valid triples manually. Therefore, knowledge graph completion, also known as link prediction in knowledge graphs, has attracted much attention recently. Link prediction aims to automatically predict missing links between entities based on known links. It is a challenging task as we not only need to predict whether there is a relation between two entities, but also need to determine which relation it is.
Inspired by word embeddings BIBREF3 that can well capture semantic meaning of words, researchers turn to distributed representations of knowledge graphs (aka, knowledge graph embeddings) to deal with the link prediction problem. Knowledge graph embeddings regard entities and relations as low dimensional vectors (or matrices, tensors), which can be stored and computed efficiently. Moreover, like in the case of word embeddings, knowledge graph embeddings can preserve the semantics and inherent structures of entities and relations. Therefore, other than the link prediction task, knowledge graph embeddings can also be used in various downstream tasks, such as triple classification BIBREF4, relation inference BIBREF5, and search personalization BIBREF6.
The success of existing knowledge graph embedding models heavily relies on their ability to model connectivity patterns of the relations, such as symmetry/antisymmetry, inversion, and composition BIBREF7. For example, TransE BIBREF8, which represents relations as translations, can model the inversion and composition patterns. DistMult BIBREF9, which models the three-way interactions between head entities, relations, and tail entities, can model the symmetry pattern. RotatE BIBREF7, which represents entities as points in a complex space and relations as rotations, can model relation patterns including symmetry/antisymmetry, inversion, and composition. However, many existing models fail to model semantic hierarchies in knowledge graphs.
Semantic hierarchy is a ubiquitous property in knowledge graphs. For instance, WordNet BIBREF10 contains the triple [arbor/cassia/palm, hypernym, tree], where “tree” is at a higher level than “arbor/cassia/palm” in the hierarchy. Freebase BIBREF11 contains the triple [England, /location/location/contains, Pontefract/Lancaster], where “Pontefract/Lancaster” is at a lower level than “England” in the hierarchy. Although there exists some work that takes the hierarchy structures into account BIBREF12, BIBREF13, they usually require additional data or process to obtain the hierarchy information. Therefore, it is still challenging to find an approach that is capable of modeling the semantic hierarchy automatically and effectively.
In this paper, we propose a novel knowledge graph embedding model—namely, Hierarchy-Aware Knowledge Graph Embedding (HAKE). To model the semantic hierarchies, HAKE is expected to distinguish entities in two categories: (a) at different levels of the hierarchy; (b) at the same level of the hierarchy. Inspired by the fact that entities that have the hierarchical properties can be viewed as a tree, we can use the depth of a node (entity) to model different levels of the hierarchy. Thus, we use modulus information to model entities in the category (a), as the size of moduli can reflect the depth. Under the above settings, entities in the category (b) will have roughly the same modulus, which is hard to distinguish. Inspired by the fact that the points on the same circle can have different phases, we use phase information to model entities in the category (b). Combining the modulus and phase information, HAKE maps entities into the polar coordinate system, where the radial coordinate corresponds to the modulus information and the angular coordinate corresponds to the phase information. Experiments show that our proposed HAKE model can not only clearly distinguish the semantic hierarchies of entities, but also significantly and consistently outperform several state-of-the-art methods on the benchmark datasets.
Notations Throughout this paper, we use lower-case letters $h$, $r$, and $t$ to represent head entities, relations, and tail entities, respectively. The triplet $(h,r,t)$ denotes a fact in knowledge graphs. The corresponding boldface lower-case letters $\textbf {h}$, $\textbf {r}$ and $\textbf {t}$ denote the embeddings (vectors) of head entities, relations, and tail entities. The $i$-th entry of a vector $\textbf {h}$ is denoted as $[\textbf {h}]_i$. Let $k$ denote the embedding dimension.
Let $\circ :\mathbb {R}^n\times \mathbb {R}^n\rightarrow \mathbb {R}^n$ denote the Hadamard product between two vectors, that is, $[\textbf {a}\circ \textbf {b}]_i=[\textbf {a}]_i\,[\textbf {b}]_i$ for $\textbf {a},\textbf {b}\in \mathbb {R}^n$,
and $\Vert \cdot \Vert _1$, $\Vert \cdot \Vert _2$ denote the $\ell _1$ and $\ell _2$ norm, respectively.
Related Work
In this section, we will describe the related work and the key differences between them and our work in two aspects—the model category and the way to model hierarchy structures in knowledge graphs.
Related Work ::: Model Category
Roughly speaking, we can divide knowledge graph embedding models into three categories—translational distance models, bilinear models, and neural network based models. Table TABREF2 exhibits several popular models.
Translational distance models describe relations as translations from source entities to target entities. TransE BIBREF8 supposes that entities and relations satisfy $\textbf {h}+\textbf {r}\approx \textbf {t}$, where $\textbf {h}, \textbf {r}, \textbf {t} \in \mathbb {R}^n$, and defines the corresponding score function as $f_r(\textbf {h},\textbf {t})=-\Vert \textbf {h}+\textbf {r}-\textbf {t}\Vert _{1/2}$. However, TransE does not perform well on 1-N, N-1 and N-N relations BIBREF14. TransH BIBREF14 overcomes the many-to-many relation problem by allowing entities to have distinct representations given different relations. The score function is defined as $f_r(\textbf {h},\textbf {t})=-\Vert \textbf {h}_{\perp }+\textbf {r}-\textbf {t}_{\perp }\Vert _2$, where $\textbf {h}_{\perp }$ and $\textbf {t}_{\perp }$ are the projections of entities onto relation-specific hyperplanes. ManifoldE BIBREF15 deals with many-to-many problems by relaxing the hypothesis $\textbf {h}+\textbf {r}\approx \textbf {t}$ to $\Vert \textbf {h}+\textbf {r}-\textbf {t}\Vert _2^2\approx \theta _r^2$ for each valid triple. In this way, the candidate entities can lie on a manifold instead of exact point. The corresponding score function is defined as $f_r(\textbf {h},\textbf {t})=-(\Vert \textbf {h}+\textbf {r}-\textbf {t}\Vert _2^2-\theta _r^2)^2$. More recently, to better model symmetric and antisymmetric relations, RotatE BIBREF7 defines each relation as a rotation from source entities to target entities in a complex vector space. The score function is defined as $f_r(\textbf {h},\textbf {t})=-\Vert \textbf {h}\circ \textbf {r}-\textbf {t}\Vert _1$, where $\textbf {h},\textbf {r},\textbf {t}\in \mathbb {C}^k$ and $|[\textbf {r}]_i|=1$.
Bilinear models use product-based score functions to match latent semantics of entities and relations embodied in their vector space representations. RESCAL BIBREF16 represents each relation as a full rank matrix, and defines the score function as $f_r(\textbf {h},\textbf {t})=\textbf {h}^\top \textbf {M}_r \textbf {t}$, which can also be seen as a bilinear function. As full rank matrices are prone to overfitting, recent works turn to make additional assumptions on $\textbf {M}_r$. For example, DistMult BIBREF9 assumes $\textbf {M}_r$ to be a diagonal matrix, and ANALOGY BIBREF19 supposes that $\textbf {M}_r$ is normal. However, these simplified models are usually less expressive and not powerful enough for general knowledge graphs. Differently, ComplEx BIBREF17 extends DistMult by introducing complex-valued embeddings to better model asymmetric and inverse relations. HolE BIBREF20 combines the expressive power of RESCAL with the efficiency and simplicity of DistMult by using the circular correlation operation.
Neural network based models have received greater attention in recent years. For example, MLP BIBREF21 and NTN BIBREF22 use a fully connected neural network to determine the scores of given triples. ConvE BIBREF18 and ConvKB BIBREF23 employ convolutional neural networks to define score functions. Recently, graph convolutional networks are also introduced, as knowledge graphs obviously have graph structures BIBREF24.
Our proposed model HAKE belongs to the translational distance models. More specifically, HAKE shares similarities with RotatE BIBREF7, in which the authors claim that they use both modulus and phase information. However, there exist two major differences between RotatE and HAKE. Detailed differences are as follows.
The aims are different. RotatE aims to model the relation patterns including symmetry/antisymmetry, inversion, and composition. HAKE aims to model the semantic hierarchy, while it can also model all the relation patterns mentioned above.
The ways to use modulus information are different. RotatE models relations as rotations in the complex space, which encourages two linked entities to have the same modulus, no matter what the relation is. The different moduli in RotatE come from the inaccuracy in training. Instead, HAKE explicitly models the modulus information, which significantly outperforms RotatE in distinguishing entities at different levels of the hierarchy.
Related Work ::: The Ways to Model Hierarchy Structures
Another related problem is how to model hierarchy structures in knowledge graphs. Some recent work considers the problem in different ways. BIBREF25 embed entities and categories jointly into a semantic space and designs models for the concept categorization and dataless hierarchical classification tasks. BIBREF13 use clustering algorithms to model the hierarchical relation structures. BIBREF12 proposed TKRL, which embeds the type information into knowledge graph embeddings. That is, TKRL requires additional hierarchical type information for entities.
Different from the previous work, our work
considers the link prediction task, which is a more common task for knowledge graph embeddings;
can automatically learn the semantic hierarchy in knowledge graphs without using clustering algorithms;
does not require any additional information other than the triples in knowledge graphs.
The Proposed HAKE
In this section, we introduce our proposed model HAKE. We first introduce two categories of entities that reflect the semantic hierarchies in knowledge graphs. Afterwards, we introduce our proposed HAKE that can model entities in both of the categories.
The Proposed HAKE ::: Two Categories of Entities
To model the semantic hierarchies of knowledge graphs, a knowledge graph embedding model must be capable of distinguishing entities in the following two categories.
Entities at different levels of the hierarchy. For example, “mammal” and “dog”, “run” and ”move”.
Entities at the same level of the hierarchy. For example, “rose” and “peony”, “truck” and ”lorry”.
The Proposed HAKE ::: Hierarchy-Aware Knowledge Graph Embedding
To model both of the above categories, we propose a hierarchy-aware knowledge graph embedding model—HAKE. HAKE consists of two parts—the modulus part and the phase part—which aim to model entities in the two different categories, respectively. Figure FIGREF13 gives an illustration of the proposed model.
To distinguish embeddings in the different parts, we use $\textbf {e}_m$ ($\textbf {e}$ can be $\textbf {h}$ or $\textbf {t}$) and $\textbf {r}_m$ to denote the entity embedding and relation embedding in the modulus part, and use $\textbf {e}_p$ ($\textbf {e}$ can be $\textbf {h}$ or $\textbf {t}$) and $\textbf {r}_p$ to denote the entity embedding and relation embedding in the phase part.
The modulus part aims to model the entities at different levels of the hierarchy. Inspired by the fact that entities with hierarchical properties can be viewed as a tree, we can use the depth of a node (entity) to model different levels of the hierarchy. Therefore, we use modulus information to model entities in the category (a), as moduli can reflect the depth in a tree. Specifically, we regard each entry of $\textbf {h}_m$ and $\textbf {t}_m$, that is, $[\textbf {h}_m]_i$ and $[\textbf {t}_m]_i$, as a modulus, and regard each entry of $\textbf {r}_m$, that is, $[\textbf {r}_m]_i$, as a scaling transformation between two moduli. We can formulate the modulus part as $\textbf {h}_m\circ \textbf {r}_m=\textbf {t}_m$, where $\textbf {h}_m,\textbf {t}_m\in \mathbb {R}^k$ and $\textbf {r}_m\in \mathbb {R}^k_+$.
The corresponding distance function is $d_{r,m}(\textbf {h}_m,\textbf {t}_m)=\Vert \textbf {h}_m\circ \textbf {r}_m-\textbf {t}_m\Vert _2$.
Note that we allow the entries of entity embeddings to be negative but restrict the entries of relation embeddings to be positive. This is because the signs of entity embeddings can help us to predict whether there exists a relation between two entities. For example, if there exists a relation $r$ between $h$ and $t_1$, and no relation between $h$ and $t_2$, then $(h, r, t_1)$ is a positive sample and $(h, r, t_2)$ is a negative sample. Our goal is to minimize $d_r(\textbf {h}_m, \textbf {t}_{1,m})$ and maximize $d_r(\textbf {h}_m, \textbf {t}_{2,m})$, so as to make a clear distinction between positive and negative samples. For the positive sample, $[\textbf {h}_m]_i$ and $[\textbf {t}_{1,m}]_i$ tend to share the same sign, as $[\textbf {r}_m]_i>0$. For the negative sample, the signs of $[\textbf {h}_m]_i$ and $[\textbf {t}_{2,m}]_i$ can be different if we initialize their signs randomly. In this way, $d_r(\textbf {h}_m, \textbf {t}_{2,m})$ is more likely to be larger than $d_r(\textbf {h}_m, \textbf {t}_{1,m})$, which is exactly what we desire. We will validate this argument by experiments in Section 4 of the supplementary material.
Further, we can expect the entities at higher levels of the hierarchy to have smaller moduli, as these entities are closer to the root of the tree.
If we use only the modulus part to embed knowledge graphs, then the entities in the category (b) will have the same modulus. Moreover, suppose that $r$ is a relation that reflects the same semantic hierarchy, then $[\textbf {r}]_i$ will tend to be one, as $h\circ r\circ r=h$ holds for all $h$. Hence, embeddings of the entities in the category (b) tend to be the same, which makes it hard to distinguish these entities. Therefore, a new module is required to model the entities in the category (b).
The phase part aims to model the entities at the same level of the semantic hierarchy. Inspired by the fact that points on the same circle (that is, with the same modulus) can have different phases, we use phase information to distinguish entities in the category (b). Specifically, we regard each entry of $\textbf {h}_p$ and $\textbf {t}_p$, that is, $[\textbf {h}_p]_i$ and $[\textbf {t}_p]_i$, as a phase, and regard each entry of $\textbf {r}_p$, that is, $[\textbf {r}_p]_i$, as a phase transformation. We can formulate the phase part as $(\textbf {h}_p+\textbf {r}_p)\bmod 2\pi =\textbf {t}_p$, where $\textbf {h}_p,\textbf {t}_p,\textbf {r}_p\in [0,2\pi )^k$.
The corresponding distance function is $d_{r,p}(\textbf {h}_p,\textbf {t}_p)=\Vert \sin ((\textbf {h}_p+\textbf {r}_p-\textbf {t}_p)/2)\Vert _1$,
where $\sin (\cdot )$ is an operation that applies the sine function to each element of the input. Note that we use a sine function to measure the distance between phases instead of using $\Vert \textbf {h}_p+\textbf {r}_p-\textbf {t}_p\Vert _1$, as phases have periodic characteristic. This distance function shares the same formulation with that of pRotatE BIBREF7.
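To illustrate why a periodic, sine-based distance is preferable to a plain $\ell _1$ difference of phases, consider the toy example below; the numbers are purely illustrative, and the exact scaling in the released implementation may differ.

```python
import numpy as np

def phase_distance_sine(h_p, r_p, t_p):
    # Sine-based phase distance: insensitive to shifts of any entry by 2*pi.
    return np.abs(np.sin((h_p + r_p - t_p) / 2)).sum()

def phase_distance_l1(h_p, r_p, t_p):
    # Plain L1 distance: ignores the periodicity of phases.
    return np.abs(h_p + r_p - t_p).sum()

h_p = np.array([0.1, 6.2])   # 6.2 is close to 2*pi, i.e. nearly the same phase as 0
r_p = np.array([0.0, 0.0])
t_p = np.array([0.1, 0.0])

print(phase_distance_sine(h_p, r_p, t_p))  # small: the phases nearly match
print(phase_distance_l1(h_p, r_p, t_p))    # large: periodicity is ignored
```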
Combining the modulus part and the phase part, HAKE maps entities into the polar coordinate system, where the radial coordinate and the angular coordinates correspond to the modulus part and the phase part, respectively. That is, HAKE maps an entity $h$ to $[\textbf {h}_m;\textbf {h}_p]$, where $\textbf {h}_m$ and $\textbf {h}_p$ are generated by the modulus part and the phase part, respectively, and $[\,\cdot \,; \,\cdot \,]$ denotes the concatenation of two vectors. Obviously, $([\textbf {h}_m]_i,[\textbf {h}_p]_i)$ is a 2D point in the polar coordinate system. Specifically, we formulate HAKE as follows:
The distance function of HAKE is $d_r(\textbf {h},\textbf {t})=d_{r,m}(\textbf {h}_m,\textbf {t}_m)+\lambda d_{r,p}(\textbf {h}_p,\textbf {t}_p)$,
where $\lambda \in \mathbb {R}$ is a parameter learned by the model. The corresponding score function is $f_r(\textbf {h},\textbf {t})=-d_r(\textbf {h},\textbf {t})=-d_{r,m}(\textbf {h}_m,\textbf {t}_m)-\lambda d_{r,p}(\textbf {h}_p,\textbf {t}_p)$.
When two entities have the same moduli, then the modulus part $d_{r,m}(\textbf {h}_m,\textbf {t}_m)=0$. However, the phase part $d_{r,p}(\textbf {h}_p,\textbf {t}_p)$ can be very different. By combining the modulus part and the phase part, HAKE can model the entities in both the category (a) and the category (b). Therefore, HAKE can model semantic hierarchies of knowledge graphs.
When evaluating the models, we find that adding a mixture bias to $d_{r,m}(\textbf {h},\textbf {t})$ can help to improve the performance of HAKE. The modified $d_{r,m}(\textbf {h},\textbf {t})$ is given by:
where $0<\textbf {r}^{\prime }_m<1$ is a vector that has the same dimension as $\textbf {r}_m$. Indeed, the above distance function is equivalent to
where $/$ denotes the element-wise division operation. If we let $\textbf {r}_m\leftarrow (1-\textbf {r}_m^{\prime })/(\textbf {r}_m+\textbf {r}_m^{\prime })$, then the modified distance function is exactly the same as the original one when comparing the distances of different entity pairs. For notation convenience, we still use $d_{r,m}(\textbf {h},\textbf {t})=\Vert \textbf {h}_m\circ \textbf {r}_m-\textbf {t}_m\Vert _2$ to represent the modulus part. We will conduct ablation studies on the bias in the experiment section.
The Proposed HAKE ::: Loss Function
To train the model, we use the negative sampling loss function with self-adversarial training BIBREF7: $L=-\log \sigma (\gamma -d_r(\textbf {h},\textbf {t}))-\sum _{i=1}^{n}p(h^{\prime }_i,r,t^{\prime }_i)\log \sigma (d_r(\textbf {h}^{\prime }_i,\textbf {t}^{\prime }_i)-\gamma ),$
where $\gamma $ is a fixed margin, $\sigma $ is the sigmoid function, and $(h^{\prime }_i,r,t^{\prime }_i)$ is the $i$th negative triple. Moreover, $p(h^{\prime }_j,r,t^{\prime }_j\,|\,\lbrace (h_i,r_i,t_i)\rbrace )=\exp (\alpha f_r(\textbf {h}^{\prime }_j,\textbf {t}^{\prime }_j))/\sum _i\exp (\alpha f_r(\textbf {h}^{\prime }_i,\textbf {t}^{\prime }_i))$
is the probability distribution of sampling negative triples, where $\alpha $ is the temperature of sampling.
Experiments and Analysis
This section is organized as follows. First, we introduce the experimental settings in detail. Then, we show the effectiveness of our proposed model on three benchmark datasets. Finally, we analyze the embeddings generated by HAKE, and show the results of ablation studies. The code of HAKE is available on GitHub at https://github.com/MIRALab-USTC/KGE-HAKE.
Experiments and Analysis ::: Experimental Settings
We evaluate our proposed models on three commonly used knowledge graph datasets—WN18RR BIBREF26, FB15k-237 BIBREF18, and YAGO3-10 BIBREF27. Details of these datasets are summarized in Table TABREF18.
WN18RR, FB15k-237, and YAGO3-10 are subsets of WN18 BIBREF8, FB15k BIBREF8, and YAGO3 BIBREF27, respectively. As pointed out by BIBREF26 and BIBREF18, WN18 and FB15k suffer from the test set leakage problem. One can attain the state-of-the-art results even using a simple rule based model. Therefore, we use WN18RR and FB15k-237 as the benchmark datasets.
Evaluation Protocol Following BIBREF8, for each triple $(h,r,t)$ in the test dataset, we replace either the head entity $h$ or the tail entity $t$ with each candidate entity to create a set of candidate triples. We then rank the candidate triples in descending order by their scores. It is worth noting that we use the “Filtered” setting as in BIBREF8, which does not take any existing valid triples into account when ranking. We choose Mean Reciprocal Rank (MRR) and Hits at N (H@N) as the evaluation metrics. Higher MRR or H@N indicate better performance.
Training Protocol We use Adam BIBREF28 as the optimizer, and use grid search to find the best hyperparameters based on the performance on the validation datasets. To make the model easier to train, we add an additional coefficient to the distance function, i.e., $d_{r}(\textbf {h},\textbf {t})=\lambda _1d_{r,m}(\textbf {h}_m,\textbf {t}_m)+\lambda _2 d_{r,p}(\textbf {h}_p,\textbf {t}_p)$, where $\lambda _1,\lambda _2\in \mathbb {R}$.
Baseline Model One may argue that the phase part is unnecessary, as we can distinguish entities in the category (b) by allowing $[\textbf {r}]_i$ to be negative. We propose a model—ModE—that uses only the modulus part but allows $[\textbf {r}]_i<0$. Specifically, the distance function of ModE is $d_r(\textbf {h},\textbf {t})=\Vert \textbf {h}\circ \textbf {r}-\textbf {t}\Vert _2$, where $\textbf {h},\textbf {r},\textbf {t}\in \mathbb {R}^k$.
Experiments and Analysis ::: Main Results
In this part, we show the performance of our proposed models—HAKE and ModE—against existing state-of-the-art methods, including TransE BIBREF8, DistMult BIBREF9, ComplEx BIBREF17, ConvE BIBREF18, and RotatE BIBREF7.
Table TABREF19 shows the performance of HAKE, ModE, and several previous models. Our baseline model ModE shares similar simplicity with TransE, but significantly outperforms it on all datasets. Surprisingly, ModE even outperforms more complex models such as DistMult, ConvE and ComplEx on all datasets, and beats the state-of-the-art model—RotatE—on the FB15k-237 and YAGO3-10 datasets, which demonstrates the great power of modulus information. Table TABREF19 also shows that our HAKE significantly outperforms existing state-of-the-art methods on all datasets.
WN18RR dataset consists of two kinds of relations: the symmetric relations such as $\_similar\_to$, which link entities in the category (b); other relations such as $\_hypernym$ and $\_member\_meronym$, which link entities in the category (a). Actually, RotatE can model entities in the category (b) very well BIBREF7. However, HAKE gains a 0.021 higher MRR, a 2.4% higher H@1, and a 2.4% higher H@3 against RotatE, respectively. The superior performance of HAKE compared with RotatE implies that our proposed model can better model different levels in the hierarchy.
The FB15k-237 dataset has more complex relation types and fewer entities, compared with WN18RR and YAGO3-10. Although there are relations that reflect hierarchy in FB15k-237, there are also lots of relations, such as “/location/location/time_zones” and “/film/film/prequel”, that do not lead to hierarchy. This characteristic of the dataset accounts for why our proposed models do not outperform the previous state-of-the-art by as large a margin as on the WN18RR and YAGO3-10 datasets. However, the results also show that our models can gain better performance so long as there exist semantic hierarchies in knowledge graphs. As almost all knowledge graphs have such hierarchy structures, our model is widely applicable.
The YAGO3-10 dataset contains entities with high relation-specific indegree BIBREF18. For example, the link prediction task $(?, hasGender, male)$ has over 1000 true answers, which makes the task challenging. Fortunately, we can regard “male” as an entity at a higher level of the hierarchy and the predicted head entities as entities at a lower level. In this way, YAGO3-10 is a dataset that clearly has the semantic hierarchy property, and we can expect that our proposed models are capable of working well on this dataset. Table TABREF19 validates our expectation. Both ModE and HAKE significantly outperform the previous state-of-the-art. Notably, HAKE gains a 0.050 higher MRR, 6.0% higher H@1 and 4.6% higher H@3 than RotatE, respectively.
Experiments and Analysis ::: Analysis on Relation Embeddings
In this part, we first show that HAKE can effectively model the hierarchy structures by analyzing the moduli of relation embeddings. Then, we show that the phase part of HAKE can help us to distinguish entities at the same level of the hierarchy by analyzing the phases of relation embeddings.
In Figure FIGREF20, we plot the distribution histograms of moduli of six relations. These relations are drawn from WN18RR, FB15k-237, and YAGO3-10. Specifically, the relations in Figures FIGREF20a, FIGREF20c, FIGREF20e and FIGREF20f are drawn from WN18RR. The relation in Figure FIGREF20d is drawn from FB15k-237. The relation in Figure FIGREF20b is drawn from YAGO3-10. We divide the relations in Figure FIGREF20 into three groups.
Relations in Figures FIGREF20c and FIGREF20d connect the entities at the same level of the semantic hierarchy;
Relations in Figures FIGREF20a and FIGREF20b represent that tail entities are at higher levels than head entities of the hierarchy;
Relations in Figures FIGREF20e and FIGREF20f represent that tail entities are at lower levels than head entities of the hierarchy.
As described in the model description section, we expect entities at higher levels of the hierarchy to have small moduli. The experiments validate this expectation. For both ModE and HAKE, most entries of the relation embeddings in group (A) take values around one, which means that the head and tail entities have approximately the same moduli. In group (B), most entries of the relation embeddings take values less than one, so the head entities have smaller moduli than the tail entities. The case of group (C) is the opposite of group (B). These results show that our model can capture the semantic hierarchies in knowledge graphs. Moreover, compared with ModE, the moduli of the relation embeddings of HAKE have lower variance, which shows that HAKE can model hierarchies more clearly.
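As a concrete illustration of how such distribution histograms can be produced, the short sketch below plots the modulus entries of one relation embedding; the array `relation_modulus` is a hypothetical stand-in for a trained embedding vector, not actual model output.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical trained modulus part of one relation embedding (k entries).
rng = np.random.default_rng(0)
relation_modulus = np.abs(rng.normal(loc=0.6, scale=0.1, size=500))

plt.hist(relation_modulus, bins=50)
plt.xlabel("modulus entry value")
plt.ylabel("count")
plt.title("Distribution of modulus entries for one relation (illustrative)")
plt.show()
```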
As mentioned above, relations in group (A) reflect the same level of the semantic hierarchy and are expected to have moduli of about one. Obviously, it is hard to distinguish entities linked by these relations using only the modulus part. In Figure FIGREF22, we plot the phases of the relations in group (A). The results show that entities at the same level of the hierarchy can be distinguished by their phases, as many of the phases take values around $\pi $.
Experiments and Analysis ::: Analysis on Entity Embeddings
In this part, to further show that HAKE can capture the semantic hierarchies between entities, we visualize the embeddings of several entity pairs.
We plot the entity embeddings of two models: the previous state-of-the-art RotatE and our proposed HAKE. RotatE regards each entity as a group of complex numbers. As a complex number can be seen as a point on a 2D plane, we can plot the RotatE entity embeddings on a 2D plane. As for HAKE, it maps entities into the polar coordinate system, so we can also plot its entity embeddings on a 2D plane based on their polar coordinates. For a fair comparison, we set $k=500$. That is, each plot contains 500 points, and the actual dimension of the entity embeddings is 1000. Note that we use a logarithmic scale to better display the differences between entity embeddings. As all the moduli have values less than one, after applying the logarithm, larger radii in the figures actually represent smaller moduli.
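A minimal sketch of this kind of visualization is given below. It assumes the HAKE embedding of an entity is available as a modulus vector and a phase vector of length $k$, and it uses hypothetical random values in place of trained embeddings; using a negative logarithm of the modulus as the plotted radius makes larger radii correspond to smaller moduli, as described above.

```python
import numpy as np
import matplotlib.pyplot as plt

k = 500
rng = np.random.default_rng(1)

def plot_entity(moduli, phases, label):
    # Log-scale the radius so differences between small moduli remain visible;
    # since moduli < 1, a larger plotted radius corresponds to a smaller modulus.
    radius = -np.log(np.clip(moduli, 1e-6, None))
    plt.scatter(radius * np.cos(phases), radius * np.sin(phases), s=4, label=label)

# Hypothetical embeddings for a head and a tail entity.
head_m, head_p = rng.uniform(0.05, 0.5, k), rng.uniform(0, 2 * np.pi, k)
tail_m, tail_p = rng.uniform(0.3, 0.9, k), rng.uniform(0, 2 * np.pi, k)

plot_entity(head_m, head_p, "head entity")
plot_entity(tail_m, tail_p, "tail entity")
plt.gca().set_aspect("equal")
plt.legend()
plt.show()
```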
Figure FIGREF29 shows the visualization results of three triples from the WN18RR dataset. Compared with the tail entities, the head entities in Figures FIGREF29a, FIGREF29b, and FIGREF29c are at lower levels, similar levels, and higher levels of the semantic hierarchy, respectively. We can see clear concentric circles in the visualization results of HAKE, which demonstrates that HAKE can effectively model the semantic hierarchies. In contrast, the entity embeddings of RotatE in all three subfigures are mixed, making it hard to distinguish entities at different levels of the hierarchy.
Experiments and Analysis ::: Ablation Studies
In this part, we conduct ablation studies on the modulus part and the phase part of HAKE, as well as the mixture bias item. Table TABREF26 shows the results on three benchmark datasets.
We can see that the bias improves the performance of HAKE on nearly all metrics. Specifically, the bias improves the H@1 score by $4.7\%$ on the YAGO3-10 dataset, which illustrates its effectiveness.
We also observe that the modulus part of HAKE alone does not perform well on any of the datasets, due to its inability to distinguish entities at the same level of the hierarchy. When only the phase part is used, HAKE degenerates to the pRotatE model BIBREF7. The phase part performs better than the modulus part, because it can model entities at the same level of the hierarchy well. However, our full HAKE model significantly outperforms both the modulus part and the phase part on all datasets, which demonstrates the importance of combining the two parts for modeling semantic hierarchies in knowledge graphs.
Experiments and Analysis ::: Comparison with Other Related Work
We compare our models with the TKRL models BIBREF12, which also aim to model hierarchical structures. For the differences between HAKE and TKRL, please refer to the Related Work section. Table TABREF27 shows the H@10 scores of HAKE and TKRL on the FB15k dataset. The best performance of TKRL is 0.734, obtained by the WHE+STC version, while the H@10 score of our HAKE model is 0.884. The results show that HAKE significantly outperforms TKRL, even though it does not require additional information.
Conclusion
To model the semantic hierarchies in knowledge graphs, we propose a novel hierarchy-aware knowledge graph embedding model, HAKE, which maps entities into the polar coordinate system. Experiments show that our proposed HAKE significantly outperforms several existing state-of-the-art methods on benchmark datasets for the link prediction task. A further investigation shows that HAKE is capable of modeling entities both at different levels and at the same level of the semantic hierarchy.
Appendix
In this appendix, we will provide analysis on relation patterns, negative entity embeddings, and moduli of entity embeddings. Then, we will give more visualization results on semantic hierarchies.
A. Analysis on Relation Patterns
In this section, we prove that our HAKE model can infer the (anti)symmetry, inversion and composition relation patterns. Detailed propositions and their proofs are as follows.
Proposition 1 HAKE can infer the (anti)symmetry pattern.
If $r(x, y)$ and $r(y, x)$ hold, we have
Then we have
Otherwise, if $r(x, y)$ and $\lnot r(y, x)$ hold, we have
Proposition 2 HAKE can infer the inversion pattern.
If $r_1(x, y)$ and $r_2(y, x)$ hold, we have
Then, we have
Proposition 3 HAKE can infer the composition pattern.
If $r_1(x, z)$, $r_2(x, y)$ and $r_3(y, z)$ hold, we have
Then we have
B. Analysis on Negative Entity Embeddings
We denote the linked entity pairs as the set of entity pairs linked by some relation, and the unlinked entity pairs as the set of entity pairs that do not appear together in any triple of the train/valid/test sets. It is worth noting that the unlinked pairs may still contain valid triples, as the knowledge graph is incomplete. For both the linked and the unlinked entity pairs, we count the embedding entries of the two entities that have different signs. Figure FIGREF34 shows the result.
For the linked entity pairs, as expected, most of the entries have the same sign. Due to the large number of unlinked entity pairs, we randomly sample a subset of them for plotting. For the unlinked entity pairs, around half of the entries have different signs, which is consistent with random initialization. These results support our hypothesis that the negative signs of entity embedding entries help our model distinguish positive and negative triples.
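The sign statistics described above can be reproduced with a short routine like the one below; the embedding matrix and the index pairs are hypothetical placeholders, not trained embeddings or actual pairs from the benchmark datasets.

```python
import numpy as np

def fraction_of_differing_signs(emb, pairs):
    """For each entity pair, compute the fraction of embedding entries with different signs."""
    fractions = []
    for i, j in pairs:
        differing = np.sign(emb[i]) != np.sign(emb[j])
        fractions.append(differing.mean())
    return np.array(fractions)

# Hypothetical embeddings and pairs.
rng = np.random.default_rng(2)
emb = rng.normal(size=(1000, 500))          # 1000 entities, 500 dimensions
linked_pairs = [(0, 1), (2, 3), (4, 5)]     # placeholder linked pairs
unlinked_pairs = [(10, 500), (20, 700)]     # placeholder sampled unlinked pairs

print("linked:", fraction_of_differing_signs(emb, linked_pairs).mean())
print("unlinked:", fraction_of_differing_signs(emb, unlinked_pairs).mean())
```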
C. Analysis on Moduli of Entity Embeddings
Figure FIGREF37 shows the moduli of the entity embeddings. We can observe that RotatE encourages the moduli of the embeddings to be the same, as relations are modeled as rotations in a complex space. Compared with RotatE, the moduli of the entity embeddings in HAKE are more dispersed, giving HAKE more capacity to model the semantic hierarchies.
D. More Results on Semantic Hierarchies
In this part, we visualize more triples from WN18RR. We plot the head and tail entities on 2D planes using the same method as in the main text. The visualization results are shown in Figure FIGREF41, where the subcaptions give the corresponding triples. The figures show that, compared with RotatE, our HAKE model can better model entities both at different levels and at the same level of the hierarchy. | radial coordinate and the angular coordinates correspond to the modulus part and the phase part, respectively
db9021ddd4593f6fadf172710468e2fdcea99674 | db9021ddd4593f6fadf172710468e2fdcea99674_0 | Q: What additional techniques are incorporated?
Text: Introduction
Removing the computer-human language barrier is an advancement that researchers have been striving to achieve for decades. One stage of this advancement will be coding through natural human language instead of a traditional programming language. On the naturalness of computer programming, D. Knuth said, “Let us change our traditional attitude to the construction of programs: Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do.” BIBREF0. Unfortunately, learning a programming language is still necessary to instruct a computer. Researchers and developers are working to overcome this human-machine language barrier. Multiple research branches exist to address this challenge (e.g., inter-conversion between programming languages to obtain universally connected programming languages). Automatic code generation from natural language is not a new concept in computer science. However, it is difficult to create such a tool for the following three reasons:
Programming languages are diverse
Different individuals express the same logical statement differently
Natural Language Processing (NLP) of programming statements is challenging, since both human and programming languages evolve over time
In this paper, a neural approach to translating pseudo-code or algorithm-like human language expressions into programming language code is proposed.
Problem Description
Code repositories (e.g., Git, SVN) flourished in the last decade, producing large amounts of code that allow data scientists to apply machine learning to it. In 2017, Allamanis M. et al. published a survey presenting the state of the art of the research areas where machine learning is changing the way programmers write code during the software engineering and development process BIBREF1. This paper discusses the factors that restrict the development of such a text-to-code conversion method and the problems that need to be solved:
Problem Description ::: Programming Language Diversity
According to available sources, there are more than a thousand actively maintained programming languages, which signifies their diversity. These languages were created for different purposes and use different syntaxes. Low-level languages such as assembly languages are easier to express in human language because they involve little or no abstraction, whereas high-level or Object-Oriented Programming (OOP) languages are more diverse in syntax and expression, which makes them challenging to bring into a unified human language structure. Portability and transparency between different programming languages also remain a challenge and an open research area. George D. et al. tried to overcome this problem through XML mapping BIBREF2. They tried to convert code from C++ to Java using XML mapping as an intermediate representation. However, the authors encountered challenges in supporting different features of the two languages.
Problem Description ::: Human Language Factor
One of the motivations behind this paper is that, as long as the subject is programming, only a small, finite set of expressions is used in the human vocabulary. For instance, programmers express a for-loop in only a few specific ways BIBREF3. Variable declaration and value assignment expressions are also limited in nature. Although all code is executable, its human textual representation may not be, due to the semantic brittleness of code. Since high-level languages have a wide range of syntax, programmers use different linguistic expressions to explain them. For instance, small changes like swapping function arguments can significantly change the meaning of the code. Hence the challenge remains in processing human language well enough to understand it properly, which brings us to the next problem:
Problem Description ::: NLP of statements
Although there is a finite set of expressions for each programming statement, it is a challenge to extract information from the statements accurately. Semantic analysis of the linguistic expression plays an important role in this information extraction. For instance, in the case of a loop: What is the initial value? What is the step value? When will the loop terminate?
Mihalcea R. et al. achieved a success rate of 70-80% in producing code directly from problem statements expressed in natural human language BIBREF3. They focused solely on the detection of steps and loops in their research. Another research group from MIT, Lei et al., used a semantic learning model to detect program inputs from text. Their model produces a parser in C++ which can successfully parse more than 70% of the textual descriptions of inputs BIBREF4. The model was initially tested against ACM-ICPC participants' inputs, which contain diverse and sometimes complex input instructions.
A recent survey from Allamanis M. et al. presented the state of the art in the area of naturalness of programming BIBREF1. A number of research works have been conducted in the text-to-code and code-to-text areas in recent years. In 2015, Oda et al. proposed a way to translate each line of Python code into natural language pseudocode using a Statistical Machine Translation (SMT) framework BIBREF5. This translation framework successfully translates code into natural language pseudocode in both English and Japanese. In the same year, Chris Q. et al. mapped natural language to simple if-this-then-that logical rules BIBREF6. Tihomir G. and Viktor K. developed anyCode, an Integrated Development Environment (IDE) code assistant tool for Java which can search, import, and call functions simply by typing the desired functionality as text BIBREF7. They used a mapping framework between natural language and function signatures, and utilized resources such as WordNet, a Java corpus, and relational mappings to process text online and offline.
More recently, in 2017, P. Yin and G. Neubig proposed a semantic parser which generates code through a neural model BIBREF8. They formulated a grammatical model which works as a skeleton for neural network training. The grammatical rules are defined based on the generalized structures of statements in the programming language.
Proposed Methodology
The use of machine learning techniques such as SMT has proved to be at most 75% successful in converting human text to executable code BIBREF9. A programming language is like a natural language but with a smaller vocabulary. For instance, the code vocabulary of the training dataset was 8814 tokens (including variable, function, and class names), whereas the English vocabulary needed to express the same code was 13659 tokens in total. Here, the programming language is treated as just another human language, and widely used SMT techniques are applied.
Proposed Methodology ::: Statistical Machine Translation
SMT techniques are widely used in Natural Language Processing (NLP). SMT plays a significant role in translation from one language to another, especially in lexical and grammatical rule extraction. In SMT, bilingual grammatical structures are formed automatically by statistical approaches instead of explicitly providing a grammatical model. This saves months or years of work that would otherwise require significant collaboration between bilingual linguists. Here, a neural network based machine translation model is used to translate regular text into programming code.
Proposed Methodology ::: Statistical Machine Translation ::: Data Preparation
SMT techniques require a parallel corpus in the source and the target language. A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus contains 18805 aligned pairs. In the source data, each line of code is expressed in English. In the target data, the code is written in the Python programming language.
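A minimal sketch of how such an aligned text-code corpus might be loaded is given below; the file names and the sample pair in the comment are hypothetical illustrations, not actual entries of the corpus used in this work.

```python
# Load an aligned parallel corpus: line i of the source file (English)
# corresponds to line i of the target file (Python code).
def load_parallel_corpus(src_path, tgt_path):
    with open(src_path, encoding="utf-8") as f:
        src_lines = [line.strip() for line in f]
    with open(tgt_path, encoding="utf-8") as f:
        tgt_lines = [line.strip() for line in f]
    assert len(src_lines) == len(tgt_lines), "source and target must be aligned line by line"
    return list(zip(src_lines, tgt_lines))

# Hypothetical usage:
# pairs = load_parallel_corpus("train.en.txt", "train.py.txt")
# pairs[0] might look like: ("assign 10 to the variable count", "count = 10")
```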
Proposed Methodology ::: Statistical Machine Translation ::: Vocabulary Generation
To train the neural model, the texts must be converted into a computational representation. To do that, two separate vocabulary files are created: one for the source texts and another for the code. Vocabulary generation is done by tokenizing the words. Afterwards, the words are embedded into a contextual vector space using the popular word2vec BIBREF10 method to make them computable.
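The vocabulary and embedding step can be sketched as follows, assuming the gensim implementation of word2vec (version 4.x parameter names); the tokenization here is plain whitespace splitting and the sample sentences are invented for illustration, both simplifications of the actual pipeline.

```python
from gensim.models import Word2Vec

# Hypothetical tokenized corpora: one list of tokens per line.
source_sentences = [line.split() for line in ["sort the list in ascending order",
                                              "assign 10 to the variable count"]]
target_sentences = [line.split() for line in ["sorted_list = sorted ( input_list )",
                                              "count = 10"]]

# Separate vocabularies/embeddings for the source (English) and target (code) sides.
src_model = Word2Vec(sentences=source_sentences, vector_size=100, window=5, min_count=1)
tgt_model = Word2Vec(sentences=target_sentences, vector_size=100, window=5, min_count=1)

print(len(src_model.wv), "source tokens;", len(tgt_model.wv), "target tokens")
print(src_model.wv["count"][:5])  # first few dimensions of one word embedding
```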
Proposed Methodology ::: Statistical Machine Translation ::: Neural Model Training
To train the text-to-code translation model, an open-source Neural Machine Translation (NMT) implementation, OpenNMT, is utilized BIBREF11. PyTorch is used as the neural network framework. For training, three types of Recurrent Neural Network (RNN) layers are used: an encoder layer, a decoder layer, and an output layer. Together, these layers form an LSTM model. LSTM is typically used in seq2seq translation.
In Fig. FIGREF13, the neural model architecture is demonstrated. The diagram shows how the model takes the source and target text as input and uses them for training. Vector representations of the tokenized source and target text are fed into the model. Each token of the source text is passed into an encoder cell, and target text tokens are passed into decoder cells. Encoder cells are part of the encoder RNN layer and decoder cells are part of the decoder RNN layer. The end of the input sequence is marked by a $<$eos$>$ token. Upon receiving the $<$eos$>$ token, the final cell state of the encoder layer initiates the output layer sequence. At each target cell state, attention is applied over the encoder RNN states and combined with the current hidden state to produce the prediction of the next target token. These predictions are then fed back into the target RNN. The attention mechanism helps overcome the fixed-length restriction of the encoder-decoder sequence and allows variable lengths between the input and output sequences. Attention uses the encoder states and passes them to the decoder cells to give particular attention to the start of an output layer sequence. The encoder uses an initial state to tell the decoder what it is supposed to generate. Effectively, the decoder learns to generate target tokens conditioned on the input sequence. Sigmoidal optimization is used to optimize the prediction.
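The encoder-decoder structure described above can be sketched in PyTorch as follows. This is a minimal illustration of an LSTM encoder, a dot-product attention step, and an LSTM decoder; the dimensions and attention form are assumptions for illustration, not the exact OpenNMT configuration used for training, and only the vocabulary sizes are taken from the text above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Seq2SeqSketch(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb=128, hid=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hid, batch_first=True)
        self.decoder = nn.LSTM(emb, hid, batch_first=True)
        self.out = nn.Linear(hid * 2, tgt_vocab)  # combines decoder state and attention context

    def forward(self, src_ids, tgt_ids):
        enc_out, state = self.encoder(self.src_emb(src_ids))       # (B, S, H)
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)    # (B, T, H)
        # Dot-product attention: each decoder step attends over all encoder states.
        scores = torch.bmm(dec_out, enc_out.transpose(1, 2))       # (B, T, S)
        context = torch.bmm(F.softmax(scores, dim=-1), enc_out)    # (B, T, H)
        return self.out(torch.cat([dec_out, context], dim=-1))     # (B, T, tgt_vocab)

# Hypothetical toy batch: 2 sentences, source length 7, target length 5.
model = Seq2SeqSketch(src_vocab=13659, tgt_vocab=8814)
src = torch.randint(0, 13659, (2, 7))
tgt = torch.randint(0, 8814, (2, 5))
logits = model(src, tgt)
print(logits.shape)  # torch.Size([2, 5, 8814])
```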
Result Analysis
The training parallel corpus contained 18805 lines of annotated code. The training model was executed several times with different training parameters. During the final training process, 500 validation samples were used to generate the recurrent neural model, which is about 3% of the training data. We ran the training for 10 epochs with a batch size of 64. After training, the accuracy of the generated model on the validation data from the source corpus was 74.40% (Fig. FIGREF17).
Although the generated code is incoherent and often contains wrongly predicted code tokens, this is expected because of the limited amount of training data. LSTMs generally require a more extensive dataset (100k+ samples in such a scenario) to build a more accurate model. The incoherence can be resolved by incorporating a code syntax tree model in the future. For instance, the input
"define the method tzname with 2 arguments: self and dt."
is translated into–
def __init__ ( self , regex ) :.
The translator successfully generates the whole line of code automatically but misses the noun parts (parameter and function names) of the syntax.
Conclusion & Future Works
The main advantage of translating to a programming language is that it has a concrete and strict lexical and grammatical structure, which human languages lack. The aim of this paper was to make the text-to-code framework work for a general-purpose programming language, primarily Python. In a later phase, phrase-based word embeddings can be incorporated for improved vocabulary mapping. To get more accurate target code for each line, an Abstract Syntax Tree (AST) can be beneficial.
The contribution of this research is a machine learning model which can turn human expressions into code. This paper also discusses available methods which successfully convert natural language to programming languages in fixed or tightly bounded linguistic paradigms. Approaching this problem with machine learning will give us the opportunity to explore the possibility of a unified programming interface in the future.
Acknowledgment
We would like to thank Dr. Khandaker Tabin Hasan, Head of the Department of Computer Science, American International University-Bangladesh, for his inspiration and encouragement in all of our research works. Also, thanks to the Future Technology Conference - 2019 committee for partially supporting us in joining the conference, and to one of our colleagues, Faheem Abrar, Software Developer, for his thorough review and comments on this research work and for supporting us by providing funding. | Unanswerable
db9021ddd4593f6fadf172710468e2fdcea99674 | db9021ddd4593f6fadf172710468e2fdcea99674_1 | Q: What additional techniques are incorporated?
| incorporating coding syntax tree model
8ea4bd4c1d8a466da386d16e4844ea932c44a412 | 8ea4bd4c1d8a466da386d16e4844ea932c44a412_0 | Q: What dataset do they use?
| A parallel corpus where the source is an English expression of code and the target is Python code.
8ea4bd4c1d8a466da386d16e4844ea932c44a412 | 8ea4bd4c1d8a466da386d16e4844ea932c44a412_1 | Q: What dataset do they use?
Text: Introduction
Removing computer-human language barrier is an inevitable advancement researchers are thriving to achieve for decades. One of the stages of this advancement will be coding through natural human language instead of traditional programming language. On naturalness of computer programming D. Knuth said, “Let us change our traditional attitude to the construction of programs: Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do.”BIBREF0. Unfortunately, learning programming language is still necessary to instruct it. Researchers and developers are working to overcome this human-machine language barrier. Multiple branches exists to solve this challenge (i.e. inter-conversion of different programming language to have universally connected programming languages). Automatic code generation through natural language is not a new concept in computer science studies. However, it is difficult to create such tool due to these following three reasons–
Programming languages are diverse
An individual person expresses logical statements differently than other
Natural Language Processing (NLP) of programming statements is challenging since both human and programming language evolve over time
In this paper, a neural approach to translate pseudo-code or algorithm like human language expression into programming language code is proposed.
Problem Description
Code repositories (i.e. Git, SVN) flourished in the last decade producing big data of code allowing data scientists to perform machine learning on these data. In 2017, Allamanis M et al. published a survey in which they presented the state-of-the-art of the research areas where machine learning is changing the way programmers code during software engineering and development process BIBREF1. This paper discusses what are the restricting factors of developing such text-to-code conversion method and what problems need to be solved–
Problem Description ::: Programming Language Diversity
According to the sources, there are more than a thousand actively maintained programming languages, which signifies the diversity of these language . These languages were created to achieve different purpose and use different syntaxes. Low-level languages such as assembly languages are easier to express in human language because of the low or no abstraction at all whereas high-level, or Object-Oriented Programing (OOP) languages are more diversified in syntax and expression, which is challenging to bring into a unified human language structure. Nonetheless, portability and transparency between different programming languages also remains a challenge and an open research area. George D. et al. tried to overcome this problem through XML mapping BIBREF2. They tried to convert codes from C++ to Java using XML mapping as an intermediate language. However, the authors encountered challenges to support different features of both languages.
Problem Description ::: Human Language Factor
One of the motivations behind this paper is - as long as it is about programming, there is a finite and small set of expression which is used in human vocabulary. For instance, programmers express a for-loop in a very few specific ways BIBREF3. Variable declaration and value assignment expressions are also limited in nature. Although all codes are executable, human representation through text may not due to the semantic brittleness of code. Since high-level languages have a wide range of syntax, programmers use different linguistic expressions to explain those. For instance, small changes like swapping function arguments can significantly change the meaning of the code. Hence the challenge remains in processing human language to understand it properly which brings us to the next problem-
Problem Description ::: NLP of statements
Although there is a finite set of expressions for each programming statements, it is a challenge to extract information from the statements of the code accurately. Semantic analysis of linguistic expression plays an important role in this information extraction. For instance, in case of a loop, what is the initial value? What is the step value? When will the loop terminate?
Mihalcea R. et al. has achieved a variable success rate of 70-80% in producing code just from the problem statement expressed in human natural language BIBREF3. They focused solely on the detection of step and loops in their research. Another research group from MIT, Lei et al. use a semantic learning model for text to detect the inputs. The model produces a parser in C++ which can successfully parse more than 70% of the textual description of input BIBREF4. The test dataset and model was initially tested and targeted against ACM-ICPC participantsínputs which contains diverse and sometimes complex input instructions.
A recent survey from Allamanis M. et al. presented the state-of-the-art on the area of naturalness of programming BIBREF1. A number of research works have been conducted on text-to-code or code-to-text area in recent years. In 2015, Oda et al. proposed a way to translate each line of Python code into natural language pseudocode using Statistical Machine Learning Technique (SMT) framework BIBREF5 was used. This translation framework was able to - it can successfully translate the code to natural language pseudo coded text in both English and Japanese. In the same year, Chris Q. et al. mapped natural language with simple if-this-then-that logical rules BIBREF6. Tihomir G. and Viktor K. developed an Integrated Development Environment (IDE) integrated code assistant tool anyCode for Java which can search, import and call function just by typing desired functionality through text BIBREF7. They have used model and mapping framework between function signatures and utilized resources like WordNet, Java Corpus, relational mapping to process text online and offline.
Recently in 2017, P. Yin and G. Neubig proposed a semantic parser which generates code through its neural model BIBREF8. They formulated a grammatical model which works as a skeleton for neural network training. The grammatical rules are defined based on the various generalized structure of the statements in the programming language.
Proposed Methodology
The use of machine learning techniques such as SMT proved to be at most 75% successful in converting human text to executable code. BIBREF9. A programming language is just like a language with less vocabulary compared to a typical human language. For instance, the code vocabulary of the training dataset was 8814 (including variable, function, class names), whereas the English vocabulary to express the same code was 13659 in total. Here, programming language is considered just like another human language and widely used SMT techniques have been applied.
Proposed Methodology ::: Statistical Machine Translation
SMT techniques are widely used in Natural Language Processing (NLP). SMT plays a significant role in translation from one language to another, especially in lexical and grammatical rule extraction. In SMT, bilingual grammatical structures are automatically formed by statistical approaches instead of explicitly providing a grammatical model. This reduces months and years of work which requires significant collaboration between bi-lingual linguistics. Here, a neural network based machine translation model is used to translate regular text into programming code.
Proposed Methodology ::: Statistical Machine Translation ::: Data Preparation
SMT techniques require a parallel corpus in thr source and thr target language. A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus has 18805 aligned data in it . In source data, the expression of each line code is written in the English language. In target data, the code is written in Python programming language.
Proposed Methodology ::: Statistical Machine Translation ::: Vocabulary Generation
To train the neural model, the texts should be converted to a computational entity. To do that, two separate vocabulary files are created - one for the source texts and another for the code. Vocabulary generation is done by tokenization of words. Afterwards, the words are put into their contextual vector space using the popular word2vec BIBREF10 method to make the words computational.
Proposed Methodology ::: Statistical Machine Translation ::: Neural Model Training
In order to train the translation model between text-to-code an open source Neural Machine Translation (NMT) - OpenNMT implementation is utilized BIBREF11. PyTorch is used as Neural Network coding framework. For training, three types of Recurrent Neural Network (RNN) layers are used – an encoder layer, a decoder layer and an output layer. These layers together form a LSTM model. LSTM is typically used in seq2seq translation.
In Fig. FIGREF13, the neural model architecture is demonstrated. The diagram shows how it takes the source and target text as input and uses it for training. Vector representation of tokenized source and target text are fed into the model. Each token of the source text is passed into an encoder cell. Target text tokens are passed into a decoder cell. Encoder cells are part of the encoder RNN layer and decoder cells are part of the decoder RNN layer. End of the input sequence is marked by a $<$eos$>$ token. Upon getting the $<$eos$>$ token, the final cell state of encoder layer initiate the output layer sequence. At each target cell state, attention is applied with the encoder RNN state and combined with the current hidden state to produce the prediction of next target token. This predictions are then fed back to the target RNN. Attention mechanism helps us to overcome the fixed length restriction of encoder-decoder sequence and allows us to process variable length between input and output sequence. Attention uses encoder state and pass it to the decoder cell to give particular attention to the start of an output layer sequence. The encoder uses an initial state to tell the decoder what it is supposed to generate. Effectively, the decoder learns to generate target tokens, conditioned on the input sequence. Sigmoidal optimization is used to optimize the prediction.
Result Analysis
The training parallel corpus had 18805 lines of annotated code in it. The training model is executed several times with different training parameters. During the final training process, 500 validation samples are used while generating the recurrent neural model, which is about 3% of the training data. The training is run for 10 epochs with a batch size of 64. After finishing the training, the accuracy of the generated model on validation data from the source corpus was 74.40% (Fig. FIGREF17).
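The 74.40% figure is the validation accuracy reported by the toolkit itself; as a rough, assumed illustration of what a per-token accuracy check looks like, the snippet below compares predicted and reference token sequences, with the translate argument standing in for the trained model's inference step.

# Illustrative sketch: per-token accuracy over held-out validation pairs.
# The translate argument is a placeholder for the trained model's inference.
def token_accuracy(val_pairs, translate):
    correct = total = 0
    for src_line, ref_line in val_pairs:
        pred_tokens = translate(src_line).split()
        ref_tokens = ref_line.split()
        total += len(ref_tokens)
        correct += sum(p == r for p, r in zip(pred_tokens, ref_tokens))
    return correct / max(total, 1)

# Dummy example: the source and the model output come from the example below;
# the reference code line is an assumed ground truth.
val_pairs = [("define the method tzname with 2 arguments : self and dt",
              "def tzname ( self , dt ) :")]
print(token_accuracy(val_pairs, lambda s: "def __init__ ( self , regex ) :"))  # 0.75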
Although the generated code is incoherent and the model often predicts wrong code tokens, this is expected because of the limited amount of training data. LSTM generally requires a more extensive dataset (100k+ pairs in such a scenario) to build a more accurate model. The incoherence can be resolved by incorporating a coding syntax tree model in the future. For instance–
"define the method tzname with 2 arguments: self and dt."
is translated into–
def __init__ ( self , regex ) :.
The translator successfully generates the whole code line automatically but misses the noun parts (parameter and function names) of the syntax.
Conclusion & Future Works
The main advantage of translating to a programming language is that it has a concrete and strict lexical and grammatical structure which human languages lack. The aim of this paper was to make the text-to-code framework work for a general purpose programming language, primarily Python. In a later phase, phrase-based word embedding can be incorporated for improved vocabulary mapping. To get more accurate target code for each line, an Abstract Syntax Tree (AST) can be beneficial.
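As a hint of how syntax-tree information could help, Python's built-in ast module can at least verify that a generated line parses, which would flag incoherent outputs like the example above; this is only a sketch of a possible direction, not part of the reported system.

# Illustrative sketch: use the ast module to reject generated lines that are
# not syntactically valid Python. Not part of the system described here.
import ast

def is_valid_python(code_line):
    try:
        ast.parse(code_line)
        return True
    except SyntaxError:
        return False

print(is_valid_python("def tzname ( self , dt ) :"))   # False: the body is missing
print(is_valid_python("def tzname(self, dt): pass"))   # True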
The contribution of this research is a machine learning model which can turn human expressions into coding expressions. This paper also discusses available methods which convert natural language to programming language successfully in fixed or tightly bounded linguistic paradigms. Approaching this problem using machine learning will give us the opportunity to explore the possibility of a unified programming interface as well in the future.
Acknowledgment
We would like to thank Dr. Khandaker Tabin Hasan, Head of the Department of Computer Science, American International University-Bangladesh, for his inspiration and encouragement in all of our research works. Also, thanks to the Future Technology Conference - 2019 committee for partially supporting us to join the conference, and to one of our colleagues, Faheem Abrar, Software Developer, for his thorough review and comments on this research work and for supporting us by providing funds. | text-code parallel corpus |
92240eeab107a4f636705b88f00cefc4f0782846 | 92240eeab107a4f636705b88f00cefc4f0782846_0 | Q: Do they compare to other models? | No |
4196d329061f5a9d147e1e77aeed6a6bd9b35d18 | 4196d329061f5a9d147e1e77aeed6a6bd9b35d18_0 | Q: What is the architecture of the system? | seq2seq translation |
a37e4a21ba98b0259c36deca0d298194fa611d2f | a37e4a21ba98b0259c36deca0d298194fa611d2f_0 | Q: How long are expressions in layman's language? | Unanswerable |
321429282557e79061fe2fe02a9467f3d0118cdd | 321429282557e79061fe2fe02a9467f3d0118cdd_0 | Q: What additional techniques could be incorporated to further improve accuracy? | phrase-based word embedding, Abstract Syntax Tree(AST) |
891cab2e41d6ba962778bda297592c916b432226 | 891cab2e41d6ba962778bda297592c916b432226_0 | Q: What programming language is target language?
Text: Introduction
Removing the computer-human language barrier is an inevitable advancement that researchers have been striving to achieve for decades. One of the stages of this advancement will be coding through natural human language instead of a traditional programming language. On the naturalness of computer programming, D. Knuth said, “Let us change our traditional attitude to the construction of programs: Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do.”BIBREF0. Unfortunately, learning a programming language is still necessary to instruct a computer. Researchers and developers are working to overcome this human-machine language barrier. Multiple branches exist to solve this challenge (e.g. inter-conversion of different programming languages to have universally connected programming languages). Automatic code generation through natural language is not a new concept in computer science studies. However, it is difficult to create such a tool due to the following three reasons–
Programming languages are diverse
An individual person expresses logical statements differently than others
Natural Language Processing (NLP) of programming statements is challenging since both human and programming languages evolve over time
In this paper, a neural approach to translate pseudo-code or algorithm-like human language expressions into programming language code is proposed.
Problem Description
Code repositories (e.g. Git, SVN) have flourished in the last decade, producing big data of code and allowing data scientists to perform machine learning on these data. In 2017, Allamanis M et al. published a survey in which they presented the state-of-the-art of the research areas where machine learning is changing the way programmers code during the software engineering and development process BIBREF1. This paper discusses the restricting factors of developing such a text-to-code conversion method and the problems that need to be solved–
Problem Description ::: Programming Language Diversity
According to the sources, there are more than a thousand actively maintained programming languages, which signifies the diversity of these languages. These languages were created to achieve different purposes and use different syntaxes. Low-level languages such as assembly languages are easier to express in human language because of the low or no abstraction at all, whereas high-level, or Object-Oriented Programming (OOP), languages are more diversified in syntax and expression, which is challenging to bring into a unified human language structure. Nonetheless, portability and transparency between different programming languages also remain a challenge and an open research area. George D. et al. tried to overcome this problem through XML mapping BIBREF2. They tried to convert codes from C++ to Java using XML mapping as an intermediate language. However, the authors encountered challenges to support different features of both languages.
Problem Description ::: Human Language Factor
One of the motivations behind this paper is that, as long as it is about programming, there is a finite and small set of expressions which is used in human vocabulary. For instance, programmers express a for-loop in a very few specific ways BIBREF3. Variable declaration and value assignment expressions are also limited in nature. Although all codes are executable, human representation through text may not be, due to the semantic brittleness of code. Since high-level languages have a wide range of syntax, programmers use different linguistic expressions to explain those. For instance, small changes like swapping function arguments can significantly change the meaning of the code. Hence the challenge remains in processing human language to understand it properly, which brings us to the next problem-
Problem Description ::: NLP of statements
Although there is a finite set of expressions for each programming statement, it is a challenge to extract information from the statements of the code accurately. Semantic analysis of linguistic expressions plays an important role in this information extraction. For instance, in case of a loop, what is the initial value? What is the step value? When will the loop terminate?
Mihalcea R. et al. have achieved a variable success rate of 70-80% in producing code just from the problem statement expressed in human natural language BIBREF3. They focused solely on the detection of step and loops in their research. Another research group from MIT, Lei et al., use a semantic learning model for text to detect the inputs. The model produces a parser in C++ which can successfully parse more than 70% of the textual descriptions of input BIBREF4. The test dataset and model were initially tested and targeted against ACM-ICPC participants' inputs, which contain diverse and sometimes complex input instructions.
A recent survey from Allamanis M. et al. presented the state-of-the-art on the area of naturalness of programming BIBREF1. A number of research works have been conducted on the text-to-code or code-to-text area in recent years. In 2015, Oda et al. proposed a way to translate each line of Python code into natural language pseudocode using a Statistical Machine Translation (SMT) framework BIBREF5. This translation framework was able to successfully translate the code into natural language pseudocoded text in both English and Japanese. In the same year, Chris Q. et al. mapped natural language with simple if-this-then-that logical rules BIBREF6. Tihomir G. and Viktor K. developed an Integrated Development Environment (IDE) integrated code assistant tool, anyCode, for Java which can search, import and call functions just by typing the desired functionality through text BIBREF7. They have used a model and mapping framework between function signatures and utilized resources like WordNet, Java Corpus, and relational mapping to process text online and offline.
Recently in 2017, P. Yin and G. Neubig proposed a semantic parser which generates code through its neural model BIBREF8. They formulated a grammatical model which works as a skeleton for neural network training. The grammatical rules are defined based on the various generalized structure of the statements in the programming language.
Proposed Methodology
The use of machine learning techniques such as SMT proved to be at most 75% successful in converting human text to executable code BIBREF9. A programming language is just like a language with less vocabulary compared to a typical human language. For instance, the code vocabulary of the training dataset was 8814 (including variable, function, and class names), whereas the English vocabulary to express the same code was 13659 in total. Here, programming language is considered just like another human language and widely used SMT techniques have been applied.
Proposed Methodology ::: Statistical Machine Translation
SMT techniques are widely used in Natural Language Processing (NLP). SMT plays a significant role in translation from one language to another, especially in lexical and grammatical rule extraction. In SMT, bilingual grammatical structures are automatically formed by statistical approaches instead of explicitly providing a grammatical model. This reduces months and years of work which would require significant collaboration between bilingual linguists. Here, a neural network based machine translation model is used to translate regular text into programming code.
Proposed Methodology ::: Statistical Machine Translation ::: Data Preparation
SMT techniques require a parallel corpus in the source and the target language. A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus has 18805 aligned data in it. In the source data, the expression of each line of code is written in the English language. In the target data, the code is written in the Python programming language.
Proposed Methodology ::: Statistical Machine Translation ::: Vocabulary Generation
To train the neural model, the texts should be converted to a computational entity. To do that, two separate vocabulary files are created - one for the source texts and another for the code. Vocabulary generation is done by tokenization of words. Afterwards, the words are put into their contextual vector space using the popular word2vec BIBREF10 method to make the words computational.
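As an illustration of this step, the sketch below tokenizes a toy text-code pair, writes the two vocabulary files, and trains word2vec embeddings on the source side. This is a minimal sketch of the described pipeline, not the authors' code; the gensim library (4.x API), the toy corpus, and the file names are assumptions.

```python
# Minimal sketch of the vocabulary-generation step described above.
# Assumes gensim >= 4.0; the toy corpus and file names are placeholders.
from gensim.models import Word2Vec

parallel_corpus = [
    ("assign the value 5 to the variable x", "x = 5"),
    ("print the variable x", "print ( x )"),
]

src_sentences = [src.split() for src, _ in parallel_corpus]   # tokenization by whitespace
tgt_sentences = [tgt.split() for _, tgt in parallel_corpus]

# One vocabulary file per side, as described in the paper.
with open("vocab.src", "w") as f:
    f.write("\n".join(sorted({tok for s in src_sentences for tok in s})))
with open("vocab.tgt", "w") as f:
    f.write("\n".join(sorted({tok for s in tgt_sentences for tok in s})))

# Put source words into a contextual vector space with word2vec (skip-gram).
w2v = Word2Vec(sentences=src_sentences, vector_size=100, window=5, min_count=1, sg=1)
print(w2v.wv["variable"][:5])   # a 100-dimensional embedding for the token "variable"
```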
Proposed Methodology ::: Statistical Machine Translation ::: Neural Model Training
In order to train the text-to-code translation model, an open source Neural Machine Translation (NMT) implementation - OpenNMT - is utilized BIBREF11. PyTorch is used as the neural network coding framework. For training, three types of Recurrent Neural Network (RNN) layers are used – an encoder layer, a decoder layer and an output layer. These layers together form an LSTM model. LSTM is typically used in seq2seq translation.
In Fig. FIGREF13, the neural model architecture is demonstrated. The diagram shows how it takes the source and target text as input and uses it for training. Vector representation of tokenized source and target text are fed into the model. Each token of the source text is passed into an encoder cell. Target text tokens are passed into a decoder cell. Encoder cells are part of the encoder RNN layer and decoder cells are part of the decoder RNN layer. End of the input sequence is marked by a $<$eos$>$ token. Upon getting the $<$eos$>$ token, the final cell state of encoder layer initiate the output layer sequence. At each target cell state, attention is applied with the encoder RNN state and combined with the current hidden state to produce the prediction of next target token. This predictions are then fed back to the target RNN. Attention mechanism helps us to overcome the fixed length restriction of encoder-decoder sequence and allows us to process variable length between input and output sequence. Attention uses encoder state and pass it to the decoder cell to give particular attention to the start of an output layer sequence. The encoder uses an initial state to tell the decoder what it is supposed to generate. Effectively, the decoder learns to generate target tokens, conditioned on the input sequence. Sigmoidal optimization is used to optimize the prediction.
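To make the encoder-decoder description above concrete, the following is a minimal PyTorch sketch of an LSTM seq2seq model with dot-product attention and one teacher-forced training step. It approximates the OpenNMT setup described in the paper rather than reproducing it; the layer sizes, the attention variant, and the random batch are assumptions (only the 13659/8814 vocabulary sizes and the batch size of 64 come from the text).

```python
# Illustrative LSTM encoder-decoder with dot-product attention; not the authors' exact model.
import torch
import torch.nn as nn

class Seq2SeqAttn(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb=256, hid=512):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hid, batch_first=True)
        self.decoder = nn.LSTM(emb, hid, batch_first=True)
        self.out = nn.Linear(hid * 2, tgt_vocab)   # [decoder state; context] -> token logits

    def forward(self, src_ids, tgt_ids):
        enc_out, state = self.encoder(self.src_emb(src_ids))        # (B, S, H)
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)     # (B, T, H)
        # Dot-product attention: each decoder step attends over all encoder states.
        scores = torch.bmm(dec_out, enc_out.transpose(1, 2))        # (B, T, S)
        context = torch.bmm(torch.softmax(scores, dim=-1), enc_out) # (B, T, H)
        return self.out(torch.cat([dec_out, context], dim=-1))      # (B, T, V_tgt)

# One teacher-forced training step on a random batch of token ids.
model = Seq2SeqAttn(src_vocab=13659, tgt_vocab=8814)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss(ignore_index=0)                     # index 0 assumed to be <pad>
src = torch.randint(1, 13659, (64, 20))
tgt = torch.randint(1, 8814, (64, 22))
logits = model(src, tgt[:, :-1])                                     # predict the next target token
loss = criterion(logits.reshape(-1, logits.size(-1)), tgt[:, 1:].reshape(-1))
loss.backward(); optimizer.step()
```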
Result Analysis
The training parallel corpus had 18805 lines of annotated code in it. The training model is executed several times with different training parameters. During the final training process, 500 validation samples are used to generate the recurrent neural model, which is 3% of the training data. We ran the training with an epoch value of 10 and a batch size of 64. After finishing the training, the accuracy of the generated model using validation data from the source corpus was 74.40% (Fig. FIGREF17).
Although the generated code is incoherent and often predicts wrong code tokens, this is expected because of the limited amount of training data. LSTM generally requires a more extensive set of data (100k+ in such a scenario) to build a more accurate model. The incoherence can be resolved by incorporating a coding syntax tree model in the future. For instance–
"define the method tzname with 2 arguments: self and dt."
is translated into–
def __init__ ( self , regex ) :.
The translator is successfully generating the whole code line automatically but missing the noun part (parameter and function name) of the syntax.
Conclusion & Future Works
The main advantage of translating to a programming language is that it has a concrete and strict lexical and grammatical structure which human languages lack. The aim of this paper was to make the text-to-code framework work for a general purpose programming language, primarily Python. In a later phase, phrase-based word embedding can be incorporated for improved vocabulary mapping. To get more accurate target code for each line, an Abstract Syntax Tree (AST) can be beneficial.
The contribution of this research is a machine learning model which can turn human expressions into coding expressions. This paper also discusses available methods which convert natural language to programming language successfully in a fixed or tightly bounded linguistic paradigm. Approaching this problem using machine learning will give us the opportunity to explore the possibility of a unified programming interface as well in the future.
Acknowledgment
We would like to thank Dr. Khandaker Tabin Hasan, Head of the Department of Computer Science, American International University-Bangladesh, for his inspiration and encouragement in all of our research works. Also, thanks to the Future Technology Conference - 2019 committee for partially supporting us to join the conference, and to one of our colleagues - Faheem Abrar, Software Developer - for his thorough review and comments on this research work and for supporting us by providing funds. | Python |
1eeabfde99594b8d9c6a007f50b97f7f527b0a17 | 1eeabfde99594b8d9c6a007f50b97f7f527b0a17_0 | Q: What dataset is used to measure accuracy?
Text: Introduction
Removing the computer-human language barrier is an inevitable advancement researchers have been striving to achieve for decades. One of the stages of this advancement will be coding through natural human language instead of a traditional programming language. On the naturalness of computer programming D. Knuth said, “Let us change our traditional attitude to the construction of programs: Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do.” BIBREF0. Unfortunately, learning a programming language is still necessary to instruct a computer. Researchers and developers are working to overcome this human-machine language barrier. Multiple branches exist to solve this challenge (e.g. inter-conversion of different programming languages to have universally connected programming languages). Automatic code generation through natural language is not a new concept in computer science studies. However, it is difficult to create such a tool due to the following three reasons–
Programming languages are diverse
An individual person expresses logical statements differently than others
Natural Language Processing (NLP) of programming statements is challenging since both human and programming languages evolve over time
In this paper, a neural approach to translate pseudo-code or algorithm-like human language expressions into programming language code is proposed.
Problem Description
Code repositories (e.g. Git, SVN) flourished in the last decade, producing big data of code and allowing data scientists to perform machine learning on these data. In 2017, Allamanis M et al. published a survey in which they presented the state-of-the-art of the research areas where machine learning is changing the way programmers code during the software engineering and development process BIBREF1. This paper discusses the restricting factors of developing such a text-to-code conversion method and the problems that need to be solved–
Problem Description ::: Programming Language Diversity
According to the sources, there are more than a thousand actively maintained programming languages, which signifies the diversity of these languages. These languages were created to achieve different purposes and use different syntaxes. Low-level languages such as assembly languages are easier to express in human language because of the low or no abstraction at all, whereas high-level, or Object-Oriented Programming (OOP), languages are more diversified in syntax and expression, which is challenging to bring into a unified human language structure. Nonetheless, portability and transparency between different programming languages also remain a challenge and an open research area. George D. et al. tried to overcome this problem through XML mapping BIBREF2. They tried to convert codes from C++ to Java using XML mapping as an intermediate language. However, the authors encountered challenges to support different features of both languages.
Problem Description ::: Human Language Factor
One of the motivations behind this paper is that, as long as it is about programming, there is a finite and small set of expressions which is used in human vocabulary. For instance, programmers express a for-loop in a very few specific ways BIBREF3. Variable declaration and value assignment expressions are also limited in nature. Although all codes are executable, human representation through text may not be, due to the semantic brittleness of code. Since high-level languages have a wide range of syntax, programmers use different linguistic expressions to explain those. For instance, small changes like swapping function arguments can significantly change the meaning of the code. Hence the challenge remains in processing human language to understand it properly, which brings us to the next problem-
Problem Description ::: NLP of statements
Although there is a finite set of expressions for each programming statement, it is a challenge to extract information from the statements of the code accurately. Semantic analysis of linguistic expressions plays an important role in this information extraction. For instance, in case of a loop, what is the initial value? What is the step value? When will the loop terminate?
Mihalcea R. et al. have achieved a variable success rate of 70-80% in producing code just from the problem statement expressed in human natural language BIBREF3. They focused solely on the detection of step and loops in their research. Another research group from MIT, Lei et al., use a semantic learning model for text to detect the inputs. The model produces a parser in C++ which can successfully parse more than 70% of the textual descriptions of input BIBREF4. The test dataset and model were initially tested and targeted against ACM-ICPC participants' inputs, which contain diverse and sometimes complex input instructions.
A recent survey from Allamanis M. et al. presented the state-of-the-art on the area of naturalness of programming BIBREF1. A number of research works have been conducted on the text-to-code or code-to-text area in recent years. In 2015, Oda et al. proposed a way to translate each line of Python code into natural language pseudocode using a Statistical Machine Translation (SMT) framework BIBREF5. This translation framework was able to successfully translate the code into natural language pseudocoded text in both English and Japanese. In the same year, Chris Q. et al. mapped natural language with simple if-this-then-that logical rules BIBREF6. Tihomir G. and Viktor K. developed an Integrated Development Environment (IDE) integrated code assistant tool, anyCode, for Java which can search, import and call functions just by typing the desired functionality through text BIBREF7. They have used a model and mapping framework between function signatures and utilized resources like WordNet, Java Corpus, and relational mapping to process text online and offline.
Recently in 2017, P. Yin and G. Neubig proposed a semantic parser which generates code through its neural model BIBREF8. They formulated a grammatical model which works as a skeleton for neural network training. The grammatical rules are defined based on the various generalized structure of the statements in the programming language.
Proposed Methodology
The use of machine learning techniques such as SMT proved to be at most 75% successful in converting human text to executable code BIBREF9. A programming language is just like a language with less vocabulary compared to a typical human language. For instance, the code vocabulary of the training dataset was 8814 (including variable, function, and class names), whereas the English vocabulary to express the same code was 13659 in total. Here, programming language is considered just like another human language and widely used SMT techniques have been applied.
Proposed Methodology ::: Statistical Machine Translation
SMT techniques are widely used in Natural Language Processing (NLP). SMT plays a significant role in translation from one language to another, especially in lexical and grammatical rule extraction. In SMT, bilingual grammatical structures are automatically formed by statistical approaches instead of explicitly providing a grammatical model. This reduces months and years of work which would require significant collaboration between bilingual linguists. Here, a neural network based machine translation model is used to translate regular text into programming code.
Proposed Methodology ::: Statistical Machine Translation ::: Data Preparation
SMT techniques require a parallel corpus in the source and the target language. A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus has 18805 aligned data in it. In the source data, the expression of each line of code is written in the English language. In the target data, the code is written in the Python programming language.
Proposed Methodology ::: Statistical Machine Translation ::: Vocabulary Generation
To train the neural model, the texts should be converted to a computational entity. To do that, two separate vocabulary files are created - one for the source texts and another for the code. Vocabulary generation is done by tokenization of words. Afterwards, the words are put into their contextual vector space using the popular word2vec BIBREF10 method to make the words computational.
Proposed Methodology ::: Statistical Machine Translation ::: Neural Model Training
In order to train the text-to-code translation model, an open source Neural Machine Translation (NMT) implementation - OpenNMT - is utilized BIBREF11. PyTorch is used as the neural network coding framework. For training, three types of Recurrent Neural Network (RNN) layers are used – an encoder layer, a decoder layer and an output layer. These layers together form an LSTM model. LSTM is typically used in seq2seq translation.
In Fig. FIGREF13, the neural model architecture is demonstrated. The diagram shows how it takes the source and target text as input and uses it for training. Vector representation of tokenized source and target text are fed into the model. Each token of the source text is passed into an encoder cell. Target text tokens are passed into a decoder cell. Encoder cells are part of the encoder RNN layer and decoder cells are part of the decoder RNN layer. End of the input sequence is marked by a $<$eos$>$ token. Upon getting the $<$eos$>$ token, the final cell state of encoder layer initiate the output layer sequence. At each target cell state, attention is applied with the encoder RNN state and combined with the current hidden state to produce the prediction of next target token. This predictions are then fed back to the target RNN. Attention mechanism helps us to overcome the fixed length restriction of encoder-decoder sequence and allows us to process variable length between input and output sequence. Attention uses encoder state and pass it to the decoder cell to give particular attention to the start of an output layer sequence. The encoder uses an initial state to tell the decoder what it is supposed to generate. Effectively, the decoder learns to generate target tokens, conditioned on the input sequence. Sigmoidal optimization is used to optimize the prediction.
Result Analysis
The training parallel corpus had 18805 lines of annotated code in it. The training model is executed several times with different training parameters. During the final training process, 500 validation samples are used to generate the recurrent neural model, which is 3% of the training data. We ran the training with an epoch value of 10 and a batch size of 64. After finishing the training, the accuracy of the generated model using validation data from the source corpus was 74.40% (Fig. FIGREF17).
Although the generated code is incoherent and often predicts wrong code tokens, this is expected because of the limited amount of training data. LSTM generally requires a more extensive set of data (100k+ in such a scenario) to build a more accurate model. The incoherence can be resolved by incorporating a coding syntax tree model in the future. For instance–
"define the method tzname with 2 arguments: self and dt."
is translated into–
def __init__ ( self , regex ) :.
The translator is successfully generating the whole code line automatically but missing the noun part (parameter and function name) of the syntax.
Conclusion & Future Works
The main advantage of translating to a programming language is that it has a concrete and strict lexical and grammatical structure which human languages lack. The aim of this paper was to make the text-to-code framework work for a general purpose programming language, primarily Python. In a later phase, phrase-based word embedding can be incorporated for improved vocabulary mapping. To get more accurate target code for each line, an Abstract Syntax Tree (AST) can be beneficial.
The contribution of this research is a machine learning model which can turn human expressions into coding expressions. This paper also discusses available methods which convert natural language to programming language successfully in a fixed or tightly bounded linguistic paradigm. Approaching this problem using machine learning will give us the opportunity to explore the possibility of a unified programming interface as well in the future.
Acknowledgment
We would like to thank Dr. Khandaker Tabin Hasan, Head of the Department of Computer Science, American International University-Bangladesh, for his inspiration and encouragement in all of our research works. Also, thanks to the Future Technology Conference - 2019 committee for partially supporting us to join the conference, and to one of our colleagues - Faheem Abrar, Software Developer - for his thorough review and comments on this research work and for supporting us by providing funds. | validation data |
e96adf8466e67bd19f345578d5a6dc68fd0279a1 | e96adf8466e67bd19f345578d5a6dc68fd0279a1_0 | Q: Is text-to-image synthesis trained is suppervized or unsuppervized manner?
Text: Introduction
“Generative Adversarial Networks (GANs), and the variations that are now being proposed is the most interesting idea in the last 10 years in ML, in my opinion.” (2016)
– Yann LeCun
A picture is worth a thousand words! While written text provide efficient, effective, and concise ways for communication, visual content, such as images, is a more comprehensive, accurate, and intelligible method of information sharing and understanding. Generation of images from text descriptions, i.e. text-to-image synthesis, is a complex computer vision and machine learning problem that has seen great progress over recent years. Automatic image generation from natural language may allow users to describe visual elements through visually-rich text descriptions. The ability to do so effectively is highly desirable as it could be used in artificial intelligence applications such as computer-aided design, image editing BIBREF0, BIBREF1, game engines for the development of the next generation of video gamesBIBREF2, and pictorial art generation BIBREF3.
Introduction ::: Traditional Learning Based Text-to-image Synthesis
In the early stages of research, text-to-image synthesis was mainly carried out through a search and supervised learning combined process BIBREF4, as shown in Figure FIGREF4. In order to connect text descriptions to images, one could use correlation between keywords (or keyphrase) & images that identifies informative and “picturable” text units; then, these units would search for the most likely image parts conditioned on the text, eventually optimizing the picture layout conditioned on both the text and the image parts. Such methods often integrated multiple artificial intelligence key components, including natural language processing, computer vision, computer graphics, and machine learning.
The major limitation of the traditional learning based text-to-image synthesis approaches is that they lack the ability to generate new image content; they can only change the characteristics of the given/training images. Alternatively, research in generative models has advanced significantly and delivers solutions to learn from training images and produce new visual content. For example, Attribute2Image BIBREF5 models each image as a composite of foreground and background. In addition, a layered generative model with disentangled latent variables is learned, using a variational auto-encoder, to generate visual content. Because the learning is customized/conditioned by given attributes, the generative models of Attribute2Image can generate images with respect to different attributes, such as gender, hair color, age, etc., as shown in Figure FIGREF5.
Introduction ::: GAN Based Text-to-image Synthesis
Although generative model based text-to-image synthesis provides much more realistic image synthesis results, the image generation is still conditioned by the limited attributes. In recent years, several papers have been published on the subject of text-to-image synthesis. Most of the contributions from these papers rely on multimodal learning approaches that include generative adversarial networks and deep convolutional decoder networks as their main drivers to generate entrancing images from text BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11.
First introduced by Ian Goodfellow et al. BIBREF9, generative adversarial networks (GANs) consist of two neural networks paired with a discriminator and a generator. These two models compete with one another, with the generator attempting to produce synthetic/fake samples that will fool the discriminator and the discriminator attempting to differentiate between real (genuine) and synthetic samples. Because GANs' adversarial training aims to cause generators to produce images similar to the real (training) images, GANs can naturally be used to generate synthetic images (image synthesis), and this process can even be customized further by using text descriptions to specify the types of images to generate, as shown in Figure FIGREF6.
Much like text-to-speech and speech-to-text conversion, there exists a wide variety of problems that text-to-image synthesis could solve in the computer vision field specifically BIBREF8, BIBREF12. Nowadays, researchers are attempting to solve a plethora of computer vision problems with the aid of deep convolutional networks, generative adversarial networks, and a combination of multiple methods, often called multimodal learning methods BIBREF8. For simplicity, multiple learning methods will be referred to as multimodal learning hereafter BIBREF13. Researchers often describe multimodal learning as a method that incorporates characteristics from several methods, algorithms, and ideas. This can include ideas from two or more learning approaches in order to create a robust implementation to solve an uncommon problem or improve a solution BIBREF8, BIBREF14, BIBREF15, BIBREF16, BIBREF17.
In this survey, we focus primarily on reviewing recent works that aim to solve the challenge of text-to-image synthesis using generative adversarial networks (GANs). In order to provide a clear roadmap, we propose a taxonomy to summarize reviewed GANs into four major categories. Our review will elaborate the motivations of methods in each category, analyze typical models, their network architectures, and possible drawbacks for further improvement. The visual abstract of the survey and the list of reviewed GAN frameworks is shown in Figure FIGREF8.
The remainder of the survey is organized as follows. Section 2 presents a brief summary of existing works on subjects similar to that of this paper and highlights the key distinctions making ours unique. Section 3 gives a short introduction to GANs and some preliminary concepts related to image generation, as they are the engines that make text-to-image synthesis possible and are essential building blocks to achieve photo-realistic images from text descriptions. Section 4 proposes a taxonomy to summarize GAN based text-to-image synthesis, discusses models and architectures of novel works focused solely on text-to-image synthesis. This section will also draw key contributions from these works in relation to their applications. Section 5 reviews GAN based text-to-image synthesis benchmarks, performance metrics, and comparisons, including a simple review of GANs for other applications. In section 6, we conclude with a brief summary and outline ideas for future interesting developments in the field of text-to-image synthesis.
Related Work
With the growth and success of GANs, deep convolutional decoder networks, and multimodal learning methods, these techniques were some of the first procedures which aimed to solve the challenge of image synthesis. Many engineers and scientists in computer vision and AI have contributed through extensive studies and experiments, with numerous proposals and publications detailing their contributions. Because GANs, introduced by BIBREF9, are emerging research topics, their practical applications to image synthesis are still in their infancy. Recently, many new GAN architectures and designs have been proposed to use GANs for different applications, e.g. using GANs to generate sentimental texts BIBREF18, or using GANs to transform natural images into cartoons BIBREF19.
Although GANs are becoming increasingly popular, very few survey papers currently exist to summarize and outline contemporaneous technical innovations and contributions of different GAN architectures BIBREF20, BIBREF21. Survey papers specifically attuned to analyzing different contributions to text-to-image synthesis using GANs are even more scarce. We have thus found two surveys BIBREF6, BIBREF7 on image synthesis using GANs, which are the two most closely related publications to our survey objective. In the following paragraphs, we briefly summarize each of these surveys and point out how our objectives differ from theirs.
In BIBREF6, the authors provide an overview of image synthesis using GANs. In this survey, the authors discuss the motivations for research on image synthesis and introduce some background information on the history of GANs, including a section dedicated to core concepts of GANs, namely generators, discriminators, and the min-max game analogy, and some enhancements to the original GAN model, such as conditional GANs, addition of variational auto-encoders, etc.. In this survey, we will carry out a similar review of the background knowledge because the understanding of these preliminary concepts is paramount for the rest of the paper. Three types of approaches for image generation are reviewed, including direct methods (single generator and discriminator), hierarchical methods (two or more generator-discriminator pairs, each with a different goal), and iterative methods (each generator-discriminator pair generates a gradually higher-resolution image). Following the introduction, BIBREF6 discusses methods for text-to-image and image-to-image synthesis, respectively, and also describes several evaluation metrics for synthetic images, including inception scores and Frechet Inception Distance (FID), and explains the significance of the discriminators acting as learned loss functions as opposed to fixed loss functions.
Different from the above survey, which has a relatively broad scope in GANs, our objective is heavily focused on text-to-image synthesis. Although this topic, text-to-image synthesis, has indeed been covered in BIBREF6, they did so in a much less detailed fashion, mostly listing the many different works in a time-sequential order. In comparison, we will review several representative methods in the field and outline their models and contributions in detail.
Similarly to BIBREF6, the second survey paper BIBREF7 begins with a standard introduction addressing the motivation of image synthesis and the challenges it presents followed by a section dedicated to core concepts of GANs and enhancements to the original GAN model. In addition, the paper covers the review of two types of applications: (1) unconstrained applications of image synthesis such as super-resolution, image inpainting, etc., and (2) constrained image synthesis applications, namely image-to-image, text-to-image, and sketch-to image, and also discusses image and video editing using GANs. Again, the scope of this paper is intrinsically comprehensive, while we focus specifically on text-to-image and go into more detail regarding the contributions of novel state-of-the-art models.
Other surveys have been published on related matters, mainly related to the advancements and applications of GANs BIBREF22, BIBREF23, but we have not found any prior works which focus specifically on text-to-image synthesis using GANs. To our knowledge, this is the first paper to do so.
Preliminaries and Frameworks
In this section, we first introduce preliminary knowledge of GANs and one of its commonly used variants, conditional GAN (i.e. cGAN), which is the building block for many GAN based text-to-image synthesis models. After that, we briefly separate GAN based text-to-image synthesis into two types, Simple GAN frameworks vs. Advanced GAN frameworks, and discuss why advanced GAN architectures are needed for image synthesis.
Notice that the simple vs. advanced GAN framework separation is rather brief; the next section will propose a taxonomy to summarize advanced GAN frameworks into four categories, based on their objectives and designs.
Preliminaries and Frameworks ::: Generative Adversarial Neural Network
Before moving on to a discussion and analysis of works applying GANs for text-to-image synthesis, there are some preliminary concepts, enhancements of GANs, datasets, and evaluation metrics that are present in some of the works described in the next section and are thus worth introducing.
As stated previously, GANs were introduced by Ian Goodfellow et al. BIBREF9 in 2014, and consist of two deep neural networks, a generator and a discriminator, which are trained independently with conflicting goals: The generator aims to generate samples closely related to the original data distribution and fool the discriminator, while the discriminator aims to distinguish between samples from the generator model and samples from the true data distribution by calculating the probability of the sample coming from either source. A conceptual view of the generative adversarial network (GAN) architecture is shown in Figure FIGREF11.
The training of GANs is an iterative process that, with each iteration, updates the generator and the discriminator with the goal of each defeating the other, leading each model to become increasingly adept at its specific task until a threshold is reached. This is analogous to a min-max game between the two models, according to the following equation:
In Eq. (DISPLAY_FORM10), $x$ denotes a multi-dimensional sample, e.g., an image, and $z$ denotes a multi-dimensional latent space vector, e.g., a multidimensional data point following a predefined distribution function such as that of normal distributions. $D_{\theta _d}()$ denotes a discriminator function, controlled by parameters $\theta _d$, which aims to classify a sample into a binary space. $G_{\theta _g}()$ denotes a generator function, controlled by parameters $\theta _g$, which aims to generate a sample from some latent space vector. For example, $G_{\theta _g}(z)$ means using a latent vector $z$ to generate a synthetic/fake image, and $D_{\theta _d}(x)$ means to classify an image $x$ as binary output (i.e. true/false or 1/0). In the GAN setting, the discriminator $D_{\theta _d}()$ is learned to distinguish a genuine/true image (labeled as 1) from fake images (labeled as 0). Therefore, given a true image $x$, the ideal output from the discriminator $D_{\theta _d}(x)$ would be 1. Given a fake image generated from the generator $G_{\theta _g}(z)$, the ideal prediction from the discriminator $D_{\theta _d}(G_{\theta _g}(z))$ would be 0, indicating the sample is a fake image.
Following the above definition, the $\min \max $ objective function in Eq. (DISPLAY_FORM10) aims to learn parameters for the discriminator ($\theta _d$) and generator ($\theta _g$) to reach an optimization goal: The discriminator intends to differentiate true vs. fake images with maximum capability $\max _{\theta _d}$ whereas the generator intends to minimize the difference between a fake image vs. a true image $\min _{\theta _g}$. In other words, the discriminator sets the characteristics and the generator produces elements, often images, iteratively until it meets the attributes set forth by the discriminator. GANs are often used with images and other visual elements and are notoriously efficient in generating compelling and convincing photorealistic images. Most recently, GANs were used to generate an original painting in an unsupervised fashion BIBREF24. The following sections go into further detail regarding how the generator and discriminator are trained in GANs.
Generator - In image synthesis, the generator network can be thought of as a mapping from one representation space (latent space) to another (actual data) BIBREF21. When it comes to image synthesis, all of the images in the data space fall into some distribution in a very complex and high-dimensional feature space. Sampling from such a complex space is very difficult, so GANs instead train a generator to create synthetic images from a much more simple feature space (usually random noise) called the latent space. The generator network performs up-sampling of the latent space and is usually a deep neural network consisting of several convolutional and/or fully connected layers BIBREF21. The generator is trained using gradient descent to update the weights of the generator network with the aim of producing data (in our case, images) that the discriminator classifies as real.
Discriminator - The discriminator network can be thought of as a mapping from image data to the probability of the image coming from the real data space, and is also generally a deep neural network consisting of several convolution and/or fully connected layers. However, the discriminator performs down-sampling as opposed to up-sampling. Like the generator, it is trained using gradient descent but its goal is to update the weights so that it is more likely to correctly classify images as real or fake.
In GANs, the ideal outcome is for both the generator's and discriminator's cost functions to converge so that the generator produces photo-realistic images that are indistinguishable from real data, and the discriminator at the same time becomes an expert at differentiating between real and synthetic data. This, however, is not possible since a reduction in cost of one model generally leads to an increase in cost of the other. This phenomenon makes training GANs very difficult, and training them simultaneously (both models performing gradient descent in parallel) often leads to a stable orbit where neither model is able to converge. To combat this, the generator and discriminator are often trained independently. In this case, the GAN remains the same, but there are different training stages. In one stage, the weights of the generator are kept constant and gradient descent updates the weights of the discriminator, and in the other stage the weights of the discriminator are kept constant while gradient descent updates the weights of the generator. This is repeated for some number of epochs until a desired low cost for each model is reached BIBREF25.
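The alternating training procedure described above can be summarized with a minimal PyTorch sketch: one discriminator update on real versus generated samples with the generator frozen, followed by one generator update against the frozen discriminator. The network shapes, the latent dimension, and the random stand-in for real data are illustrative assumptions, not part of any reviewed model.

```python
# Minimal sketch of alternating GAN training; all sizes are illustrative.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(32, img_dim) * 2 - 1               # stand-in for a batch of real training images
ones, zeros = torch.ones(32, 1), torch.zeros(32, 1)

# Stage 1: update D with G held fixed (the max over theta_d in the objective).
fake = G(torch.randn(32, latent_dim)).detach()        # detach: no gradient flows back into G
d_loss = bce(D(real), ones) + bce(D(fake), zeros)
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Stage 2: update G with D held fixed (the min over theta_g): make D label fakes as real.
fake = G(torch.randn(32, latent_dim))
g_loss = bce(D(fake), ones)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```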
Preliminaries and Frameworks ::: cGAN: Conditional GAN
Conditional Generative Adversarial Networks (cGAN) are an enhancement of GANs proposed by BIBREF26 shortly after the introduction of GANs by BIBREF9. The objective function of the cGAN is defined in Eq. (DISPLAY_FORM13) which is very similar to the GAN objective function in Eq. (DISPLAY_FORM10) except that the inputs to both discriminator and generator are conditioned by a class label $y$.
The main technical innovation of cGAN is that it introduces an additional input or inputs to the original GAN model, allowing the model to be trained on information such as class labels or other conditioning variables as well as the samples themselves, concurrently. Whereas the original GAN was trained only with samples from the data distribution, resulting in the generated sample reflecting the general data distribution, cGAN enables directing the model to generate more tailored outputs.
In Figure FIGREF14, the condition vector is the class label (text string) "Red bird", which is fed to both the generator and discriminator. It is important, however, that the condition vector is related to the real data. If the model in Figure FIGREF14 was trained with the same set of real data (red birds) but the condition text was "Yellow fish", the generator would learn to create images of red birds when conditioned with the text "Yellow fish".
Note that the condition vector in cGAN can come in many forms, such as texts, not just limited to the class label. Such a unique design provides a direct solution to generate images conditioned by predefined specifications. As a result, cGAN has been used in text-to-image synthesis since the very first day of its invention although modern approaches can deliver much better text-to-image synthesis results.
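A minimal sketch of this conditioning mechanism is shown below: the same condition vector $y$ (here an embedded class label such as "Red bird") is concatenated to the generator's latent input and to the discriminator's input, mirroring Eq. (DISPLAY_FORM13). All dimensions and the embedding-based label encoding are assumptions for illustration only.

```python
# Sketch of cGAN conditioning: both G and D receive the condition vector y.
import torch
import torch.nn as nn

latent_dim, cond_dim, img_dim, n_classes = 64, 16, 28 * 28, 10
label_embed = nn.Embedding(n_classes, cond_dim)       # encodes a class label like "Red bird"
G = nn.Sequential(nn.Linear(latent_dim + cond_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim + cond_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

y = label_embed(torch.full((32,), 3, dtype=torch.long))  # same condition for a batch of 32
z = torch.randn(32, latent_dim)
fake = G(torch.cat([z, y], dim=1))                       # generator sees (z, y)
score = D(torch.cat([fake, y], dim=1))                   # discriminator sees (x, y)
```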
Preliminaries and Frameworks ::: Simple GAN Frameworks for Text-to-Image Synthesis
In order to generate images from text, one simple solution is to employ the conditional GAN (cGAN) designs and add conditions to the training samples, such that the GAN is trained with respect to the underlying conditions. Several pioneer works have followed similar designs for text-to-image synthesis.
An essential disadvantage of using cGAN for text-to-image synthesis is that it cannot handle complicated textual descriptions for image generation, because cGAN uses labels as conditions to restrict the GAN inputs. If the text inputs have multiple keywords (or long text descriptions) they cannot be used simultaneously to restrict the input. Instead of using text as conditions, another two approaches BIBREF8, BIBREF16 use text as input features, and concatenate such features with other features to train discriminator and generator, as shown in Figure FIGREF15(b) and (c). To ensure text being used as GAN input, a feature embedding or feature representation learning BIBREF29, BIBREF30 function $\varphi ()$ is often introduced to convert input text into numeric features, which are further concatenated with other features to train GANs.
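The sketch below illustrates this alternative of using text as an input feature: a small recurrent encoder plays the role of $\varphi ()$, mapping a tokenized caption to a dense vector that is concatenated with the noise vector before being fed to the generator. The encoder architecture and every dimension are assumptions; the reviewed systems typically use pretrained char-CNN-RNN or similar text encoders instead.

```python
# Sketch of feeding a learned text embedding phi(t) into the GAN input.
import torch
import torch.nn as nn

class TextEncoder(nn.Module):                        # phi(): tokens -> sentence vector
    def __init__(self, vocab=5000, emb=128, hid=256, out=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.rnn = nn.GRU(emb, hid, batch_first=True)
        self.proj = nn.Linear(hid, out)
    def forward(self, tokens):
        _, h = self.rnn(self.emb(tokens))
        return self.proj(h[-1])                      # (B, out)

phi = TextEncoder()
tokens = torch.randint(0, 5000, (4, 12))             # a batch of 4 tokenized captions
z = torch.randn(4, 100)
gen_input = torch.cat([z, phi(tokens)], dim=1)        # fed to the generator instead of z alone
```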
Preliminaries and Frameworks ::: Advanced GAN Frameworks for Text-to-Image Synthesis
Motivated by the GAN and conditional GAN (cGAN) design, many GAN based frameworks have been proposed to generate images, with different designs and architectures, such as using multiple discriminators, using progressively trained discriminators, or using hierarchical discriminators. Figure FIGREF17 outlines several advanced GAN frameworks in the literature. In addition to these frameworks, many new designs are being proposed to advance the field with rather sophisticated designs. For example, a recent work BIBREF37 proposes to use a pyramid generator and three independent discriminators, each focusing on a different aspect of the images, to lead the generator towards creating images that are photo-realistic on multiple levels. Another recent publication BIBREF38 proposes to use the discriminator to measure semantic relevance between image and text instead of class prediction (like most discriminators in GANs do), resulting in a new GAN structure outperforming the text conditioned auxiliary classifier (TAC-GAN) BIBREF16 and generating images that are diverse, realistic, and relevant to the input text regardless of class.
In the following section, we will first propose a taxonomy that summarizes advanced GAN frameworks for text-to-image synthesis, and review the most recently proposed solutions to the challenge of generating photo-realistic images conditioned on natural language text descriptions using GANs. The solutions we discuss are selected based on relevance and quality of contributions. Many publications exist on the subject of image-generation using GANs, but in this paper we focus specifically on models for text-to-image synthesis, with the review emphasizing the “model” and “contributions” for text-to-image synthesis. At the end of this section, we also briefly review methods using GANs for other image-synthesis applications.
Text-to-Image Synthesis Taxonomy and Categorization
In this section, we propose a taxonomy to summarize advanced GAN based text-to-image synthesis frameworks, as shown in Figure FIGREF24. The taxonomy organizes GAN frameworks into four categories, including Semantic Enhancement GANs, Resolution Enhancement GANs, Diversity Enhancement GANs, and Motion Enhancement GANs. Following the proposed taxonomy, each subsection will introduce several typical frameworks and address their techniques of using GANs to solve certain aspects of the text-to-image synthesis challenges.
Text-to-Image Synthesis Taxonomy and Categorization ::: GAN based Text-to-Image Synthesis Taxonomy
Although the ultimate goal of Text-to-Image synthesis is to generate images closely related to the textual descriptions, the relevance of the images to the texts is often validated from different perspectives, due to the inherent diversity of human perceptions. For example, when generating images matching the description “rose flowers”, some users may know the exact type of flowers they like and intend to generate rose flowers with similar colors. Other users may seek to generate high quality rose flowers with a nice background (e.g. garden). The third group of users may be more interested in generating flowers similar to roses but with different colors and visual appearance, e.g. roses, begonia, and peony. The fourth group of users may want to not only generate flower images, but also use them to form a meaningful action, e.g. a video clip showing flower growth, performing a magic show using those flowers, or telling a love story using the flowers.
From the text-to-image synthesis point of view, the first group of users intend to precisely control the semantics of the generated images, and their goal is to match the texts and images at the semantic level. The second group of users are more focused on the resolution and the quality of the images, in addition to the requirement that the images and texts are semantically related. For the third group of users, their goal is to diversify the output images, such that their images carry diversified visual appearances and are also semantically related. The fourth user group adds a new dimension in image synthesis, and aims to generate sequences of images which are coherent in temporal order, i.e. capture the motion information.
Based on the above descriptions, we categorize GAN based Text-to-Image Synthesis into a taxonomy with four major categories, as shown in Fig. FIGREF24.
Semantic Enhancement GANs: Semantic enhancement GANs represent pioneer works of GAN frameworks for text-to-image synthesis. The main focus of the GAN frameworks is to ensure that the generated images are semantically related to the input texts. This objective is mainly achieved by using a neural network to encode texts as dense features, which are further fed to a second network to generate images matching to the texts.
Resolution Enhancement GANs: Resolution enhancement GANs mainly focus on generating high quality images which are semantically matched to the texts. This is mainly achieved through a multi-stage GAN framework, where the outputs from earlier stage GANs are fed to the second (or later) stage GAN to generate better quality images.
Diversity Enhancement GANs: Diversity enhancement GANs intend to diversify the output images, such that the generated images are not only semantically related but also have different types and visual appearance. This objective is mainly achieved through an additional component to estimate semantic relevance between generated images and texts, in order to maximize the output diversity.
Motion Enhancement GANs: Motion enhancement GANs intend to add a temporal dimension to the output images, such that they can form meaningful actions with respect to the text descriptions. This goal is mainly achieved through a two-step process which first generates images matching the “actions” of the texts, followed by a mapping or alignment procedure to ensure that images are coherent in the temporal order.
In the following, we will introduce how these GAN frameworks evolve for text-to-image synthesis, and will also review some typical methods of each category.
Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs
Semantic relevance is one of the most important criteria of text-to-image synthesis. For most GANs discussed in this survey, they are required to generate images semantically related to the text descriptions. However, semantic relevance is a rather subjective measure, and images are inherently rich in terms of their semantics and interpretations. Therefore, many GANs are further proposed to enhance the text-to-image synthesis from different perspectives. In this subsection, we will review several classical approaches which commonly serve as text-to-image synthesis baselines.
Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs ::: DC-GAN
Deep convolution generative adversarial network (DC-GAN) BIBREF8 represents the pioneer work for text-to-image synthesis using GANs. Its main goal is to train a deep convolutional generative adversarial network (DC-GAN) on text features. During this process these text features are encoded by another neural network. This neural network is a hybrid convolutional recurrent network at the character level. Concurrently, both neural networks have also feed-forward inference in the way they condition text features. Generating realistic images automatically from natural language text is the motivation of several of the works proposed in this computer vision field. However, actual artificial intelligence (AI) systems are far from achieving this task BIBREF8, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF22, BIBREF26. Lately, recurrent neural networks led the way to develop frameworks that learn discriminatively on text features. At the same time, generative adversarial networks (GANs) began recently to show some promise on generating compelling images of a whole host of elements including but not limited to faces, birds, flowers, and non-common images such as room interiorsBIBREF8. DC-GAN is a multimodal learning model that attempts to bridge together both of the above mentioned unsupervised machine learning algorithms, the recurrent neural networks (RNN) and generative adversarial networks (GANs), with the sole purpose of speeding the generation of text-to-image synthesis.
Deep learning shed some light on some of the most sophisticated advances in natural language representation, image synthesis BIBREF7, BIBREF8, BIBREF43, BIBREF35, and classification of generic data BIBREF44. However, a bulk of the latest breakthroughs in deep learning and computer vision were related to supervised learning BIBREF8. Even though natural language and image synthesis were part of several contributions on the supervised side of deep learning, unsupervised learning recently saw a tremendous rise in interest from the research community, especially on two subproblems: text-based natural language and image synthesis BIBREF45, BIBREF14, BIBREF8, BIBREF46, BIBREF47. These subproblems are typically subdivided as focused research areas. DC-GAN's contributions are mainly driven by these two research areas. In order to generate plausible images from natural language, DC-GAN contributions revolve around developing a straightforward yet effective GAN architecture and training strategy that allows natural text to image synthesis. These contributions are primarily tested on the Caltech-UCSD Birds and Oxford-102 Flowers datasets. Each image in these datasets carries five text descriptions. These text descriptions were created by the research team when setting up the evaluation environment. The DC-GANs model is subsequently trained on several subcategories. Subcategories in this research represent the training and testing sub datasets. The performance shown by these experiments displays a promising yet effective way to generate images from textual natural language descriptions BIBREF8.
Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs ::: DC-GAN Extensions
Following the pioneer DC-GAN framework BIBREF8, many researchers propose revised network structures (e.g. different discriminators) in order to improve images with better semantic relevance to the texts. Based on the deep convolutional adversarial network (DC-GAN) network architecture, GAN-CLS with an image-text matching discriminator, GAN-INT learned with text manifold interpolation, and GAN-INT-CLS which combines both are proposed to find a semantic match between text and image. Similar to the DC-GAN architecture, an adaptive loss function (i.e. Perceptual Loss BIBREF48) is proposed for semantic image synthesis which can synthesize a realistic image that not only matches the target text description but also keeps the irrelevant features (e.g. background) from source images BIBREF49. Regarding the Perceptual Losses, three loss functions (i.e. Pixel reconstruction loss, Activation reconstruction loss and Texture reconstruction loss) are proposed in BIBREF50 in which they construct the network architectures based on the DC-GAN, i.e. GAN-INT-CLS-Pixel, GAN-INT-CLS-VGG and GAN-INT-CLS-Gram with respect to the three losses. In BIBREF49, a residual transformation unit is added in the network to retain a similar structure to the source image.
Following BIBREF49 and considering that features in early CNN layers address the background while the foreground is obtained in later layers, a pair of discriminators with different architectures (i.e. Paired-D GAN) is proposed to synthesize background and foreground from a source image separately BIBREF51. Meanwhile, the skip-connection in the generator is employed to more precisely retain background information in the source image.
Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs ::: MC-GAN
When synthesising images, most text-to-image synthesis methods consider each output image as one single unit to characterize its semantic relevance to the texts. This is likely problematic because most images naturally consist of two crucial components: foreground and background. Without properly separating these two components, it's hard to characterize the semantics of an image if the whole image is treated as a single unit without proper separation.
In order to enhance the semantic relevance of the images, a multi-conditional GAN (MC-GAN) BIBREF52 is proposed to synthesize a target image by combining the background of a source image and a text-described foreground object which does not exist in the source image. A unique feature of MC-GAN is that it proposes a synthesis block in which the background feature is extracted from the given image without a non-linear function (i.e. only using convolution and batch normalization) and the foreground feature is the feature map from the previous layer.
Because MC-GAN is able to properly model the background and foreground of the generated images, a unique strength of MC-GAN is that users are able to provide a base image and MC-GAN is able to preserve the background information of the base image when generating new images.
Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs
Due to the fact that training GANs is much more difficult when generating high-resolution images, a two-stage GAN (i.e. StackGAN) is proposed in which rough images (i.e. low-resolution images) are generated in stage-I and refined in stage-II. To further improve the quality of generated images, the second version of StackGAN (i.e. StackGAN++) is proposed to use multi-stage GANs to generate multi-scale images. A color-consistency regularization term is also added into the loss to keep the consistency of images across different scales.
While StackGAN and StackGAN++ are both built on the global sentence vector, AttnGAN is proposed to use an attention mechanism (i.e. the Deep Attentional Multimodal Similarity Model (DAMSM)) to incorporate multi-level information (i.e. word level and sentence level) into GANs. In the following, StackGAN, StackGAN++ and AttnGAN will be explained in detail.
Recently, the Dynamic Memory Generative Adversarial Network (DM-GAN) BIBREF53, which uses a dynamic memory component, was proposed to focus on refining the initially generated image, which is the key to the success of generating high-quality images.
Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: StackGAN
In 2017, Zhang et al. proposed a model for generating photo-realistic images from text descriptions called StackGAN (Stacked Generative Adversarial Network) BIBREF33. In their work, they define a two-stage model that uses two cascaded GANs, each corresponding to one of the stages. The stage I GAN takes a text description as input, converts the text description to a text embedding containing several conditioning variables, and generates a low-quality 64$\times $64 image with rough shapes and colors based on the computed conditioning variables. The stage II GAN then takes this low-quality stage I image as well as the same text embedding and uses the conditioning variables to correct and add more detail to the stage I result. The output of stage II is a photorealistic 256$\times $256 image that resembles the text description with compelling accuracy.
One major contribution of StackGAN is the use of cascaded GANs for text-to-image synthesis through a sketch-refinement process. By conditioning the stage II GAN on the image produced by the stage I GAN and the text description, the stage II GAN is able to correct defects in the stage I output, resulting in high-quality 256$\times $256 images. Prior works have utilized “stacked” GANs to separate the image generation process into structure and style BIBREF42, multiple stages each generating lower-level representations from higher-level representations of the previous stage BIBREF35, and multiple stages combined with a Laplacian pyramid approach BIBREF54, which was introduced for image compression by P. Burt and E. Adelson in 1983 and uses the differences between consecutive down-samples of an original image to reconstruct the original image from its down-sampled version BIBREF55. However, these works did not use text descriptions to condition their generator models.
Conditioning Augmentation is the other major contribution of StackGAN. Prior works transformed the natural language text description into a fixed text embedding containing static conditioning variables which were fed to the generator BIBREF8. StackGAN does this and then creates a Gaussian distribution from the text embedding and randomly samples variables from the Gaussian distribution to add to the set of conditioning variables during training. This encourages robustness by introducing small variations to the original text embedding for a particular training image, while keeping unchanged the training image against which the generated output is compared. The result is that the trained model produces more diverse images in the same distribution when using Conditioning Augmentation than the same model using a fixed text embedding BIBREF33.
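To make this concrete, the following minimal PyTorch sketch illustrates one way Conditioning Augmentation can be implemented. It is our illustration rather than the published code; the module name, the 1024-dimensional sentence embedding, and the 128-dimensional condition vector are assumptions chosen for readability.

```python
# Minimal sketch of Conditioning Augmentation (names and sizes are illustrative).
import torch
import torch.nn as nn


class ConditioningAugmentation(nn.Module):
    def __init__(self, embed_dim=1024, cond_dim=128):
        super().__init__()
        # A single linear layer predicts the mean and log-variance of a
        # Gaussian built from the sentence embedding.
        self.fc = nn.Linear(embed_dim, cond_dim * 2)
        self.cond_dim = cond_dim

    def forward(self, sent_embedding):
        stats = self.fc(sent_embedding)
        mu, logvar = stats[:, :self.cond_dim], stats[:, self.cond_dim:]
        std = torch.exp(0.5 * logvar)
        # Reparameterization: sample conditioning variables around the text
        # embedding, adding the small variations that encourage robustness.
        c_hat = mu + std * torch.randn_like(std)
        return c_hat, mu, logvar


# Example: perturb a batch of four 1024-d sentence embeddings into 128-d conditions.
ca = ConditioningAugmentation()
c_hat, mu, logvar = ca(torch.randn(4, 1024))
```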
Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: StackGAN++
Proposed by the same authors as StackGAN, StackGAN++ is also a stacked GAN model, but organizes the generators and discriminators in a “tree-like” structure BIBREF47 with multiple stages. The first stage combines a noise vector and conditioning variables (with the Conditioning Augmentation introduced in BIBREF33) as input to the first generator, which generates a low-resolution image, 64$\times $64 by default (this can be changed depending on the desired number of stages). Each following stage uses the result from the previous stage and the conditioning variables to produce gradually higher-resolution images. These stages do not use the noise vector again, as the creators assume that the randomness it introduces is already preserved in the output of the first stage. The final stage produces a 256$\times $256 high-quality image.
StackGAN++ introduces a joint conditional and unconditional approximation in its design BIBREF47. The discriminators are trained to calculate the loss between the image produced by the generator and the conditioning variables (measuring how accurately the image represents the description) as well as the loss between the image and real images (probability of the image being real or fake). The generators then aim to minimize the sum of these losses, improving the final result.
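A minimal sketch of this joint conditional and unconditional discriminator loss is given below, assuming a discriminator that exposes two logit heads (one conditioned on the text embedding and one unconditioned); the function and tensor names are illustrative and do not come from the published implementation.

```python
# Sketch of a joint conditional + unconditional discriminator loss
# (tensor and head names are illustrative assumptions).
import torch
import torch.nn.functional as F


def discriminator_loss(uncond_logits_real, uncond_logits_fake,
                       cond_logits_real, cond_logits_fake):
    ones = torch.ones_like(uncond_logits_real)
    zeros = torch.zeros_like(uncond_logits_fake)
    # Unconditional term: is the image real or fake?
    uncond = (F.binary_cross_entropy_with_logits(uncond_logits_real, ones)
              + F.binary_cross_entropy_with_logits(uncond_logits_fake, zeros))
    # Conditional term: does the image match the text condition?
    cond = (F.binary_cross_entropy_with_logits(cond_logits_real, ones)
            + F.binary_cross_entropy_with_logits(cond_logits_fake, zeros))
    return uncond + cond  # the generator minimizes the mirrored sum


# Example with random logits for a batch of 8 images.
b = torch.randn(8, 1)
print(discriminator_loss(b, b.clone(), b.clone(), b.clone()))
```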
Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: AttnGAN
Attentional Generative Adversarial Network (AttnGAN) BIBREF10 is very similar, in terms of its structure, to StackGAN++ BIBREF47, discussed in the previous section, but some novel components are added. Like previous works BIBREF56, BIBREF8, BIBREF33, BIBREF47, a text encoder generates a text embedding with conditioning variables based on the overall sentence. Additionally, the text encoder generates a separate text embedding with conditioning variables based on individual words. This process is optimized to produce meaningful variables using a bidirectional recurrent neural network (BRNN), more specifically bidirectional Long Short Term Memory (LSTM) BIBREF57, which, for each word in the description, generates conditions based on the previous word as well as the next word (bidirectional). The first stage of AttnGAN generates a low-resolution image based on the sentence-level text embedding and random noise vector. The output is fed along with the word-level text embedding to an “attention model”, which matches the word-level conditioning variables to regions of the stage I image, producing a word-context matrix. This is then fed to the next stage of the model along with the raw previous stage output. Each consecutive stage works in the same manner, but produces gradually higher-resolution images conditioned on the previous stage.
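The word-context computation can be illustrated with the simplified snippet below. It assumes the word features and image region features have already been projected into a common feature space, and it omits the masking and scaling details of the published implementation.

```python
# Simplified word-level attention producing a word-context matrix
# (projection to a common feature dimension is assumed to be done already).
import torch
import torch.nn.functional as F


def word_context(word_feats, region_feats):
    """word_feats: (batch, num_words, dim); region_feats: (batch, num_regions, dim)."""
    # Similarity of every image region to every word.
    scores = torch.bmm(region_feats, word_feats.transpose(1, 2))  # (b, regions, words)
    attn = F.softmax(scores, dim=-1)
    # Each region becomes a weighted sum of word vectors: the word-context matrix.
    return torch.bmm(attn, word_feats)  # (b, regions, dim)


# Example: 16 words, 64 image regions, 256-d shared feature space.
ctx = word_context(torch.randn(2, 16, 256), torch.randn(2, 64, 256))
print(ctx.shape)  # torch.Size([2, 64, 256])
```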
Two major contributions were introduced in AttnGAN: the attentional generative network and the Deep Attentional Multimodal Similarity Model (DAMSM) BIBREF47. The attentional generative network matches specific regions of each stage's output image to conditioning variables from the word-level text embedding. This is a very worthy contribution, allowing each consecutive stage to focus on specific regions of the image independently, adding “attentional” details region by region as opposed to the whole image. The DAMSM is also a key feature introduced by AttnGAN, which is used after the result of the final stage to calculate the similarity between the generated image and the text embedding at both the sentence level and the more fine-grained word level. Table TABREF48 shows scores from different metrics for StackGAN, StackGAN++, AttnGAN, and HDGAN on the CUB, Oxford, and COCO datasets. The table shows that AttnGAN outperforms the other models in terms of IS on the CUB dataset by a small amount and greatly outperforms them on the COCO dataset.
Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: HDGAN
The hierarchically-nested adversarial network (HDGAN) is a method proposed by BIBREF36 whose main objective is to tackle the difficult problem of generating photographic images from semantic text descriptions, applied to images from diverse datasets. The method introduces adversarial objectives nested inside hierarchically oriented networks BIBREF36. The hierarchical networks help regularize mid-level representations and assist the training of the generator in capturing highly complex visual elements, which are modeled statistically from settings extracted directly from the image. Rather than using multiple generator streams, the paper adopts a single-stream architecture that serves as the generator and adapts to the joint discriminators; once these joint discriminators are set up appropriately, the single-stream generator progressively advances the generated images to much higher resolutions BIBREF36.
The main contributions of HDGAN include the introduction of a visual-semantic similarity measure BIBREF36, which aids in evaluating the consistency of generated images and in testing the logical consistency of the end product BIBREF36. The end product in this case consists of images that are semantically mapped from text-based natural language descriptions to each area of the picture, e.g. a wing on a bird or a petal on a flower. Deep learning has created a multitude of opportunities and challenges for researchers in the computer vision AI field, and coupled with GAN and multimodal learning architectures, this field has seen tremendous growth BIBREF8, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF22, BIBREF26. Based on these advancements, HDGAN attempts to further extend some desirable and less common features when generating images from textual natural language BIBREF36. In other words, it takes sentences and treats them as a hierarchical structure. This has some positive and negative implications: it makes generating compelling images more complex, but a key benefit of this elaborate process is the realism obtained once all stages are completed. In addition, one common feature added to this process is the ability to identify parts of sentences with bounding boxes: if a sentence includes common characteristics of a bird, the model will surround the attributes of that bird with bounding boxes, and in practice the same should happen if the desired image contains other elements such as human faces (e.g. eyes, hair, etc), flowers (e.g. petal size, color, etc), or any other inanimate object (e.g. a table, a mug, etc). Finally, HDGAN evaluated some of its claims on common text-to-image datasets such as CUB, COCO, and Oxford-102 BIBREF8, BIBREF36, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF22, BIBREF26. These datasets were first utilized in earlier works BIBREF8, and most of them come with additional features such as image annotations, labels, or descriptions. The qualitative and quantitative results reported by the researchers in this study were far superior to earlier works in this same field of computer vision AI.
Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs
In this subsection, we introduce text-to-image synthesis methods which try to maximize the diversity of the output images, based on the text descriptions.
Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: AC-GAN
Two issues arise in the traditional GANs BIBREF58 for image synthesis: (1) the scalability problem: traditional GANs cannot predict a large number of image categories; and (2) the diversity problem: images are often subject to one-to-many mapping, so one image could be labeled with different tags or described using different texts. To address these problems, a GAN conditioned on additional information, e.g. cGAN, is an alternative solution. However, although cGAN and many previously introduced approaches are able to generate images with respect to the text descriptions, they often output images with similar types and visual appearance.
Slightly different from the cGAN, the auxiliary classifier GAN (AC-GAN) BIBREF27 proposes to improve the diversity of output images by using an auxiliary classifier to control the output images. The overall structure of AC-GAN is shown in Fig. FIGREF15(c). In AC-GAN, every generated image is associated with a class label, in addition to the true/fake label which is commonly used in GAN or cGAN. The discriminator of AC-GAN not only outputs a probability distribution over sources (i.e. whether the image is true or fake), it also outputs a probability distribution over the class labels (i.e. predicts which class the image belongs to).
By using an auxiliary classifier layer to predict the class of the image, AC-GAN is able to use the predicted class labels of the images to ensure that the output consists of images from different classes, resulting in diversified synthesized images. The results show that AC-GAN can generate images with high diversity.
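A minimal sketch of such a two-headed discriminator is shown below. The convolutional trunk, the 32$\times $32 input size, and the number of classes are illustrative assumptions; the key idea, one head for the real/fake source and one for the class distribution, follows the AC-GAN design described above.

```python
# Sketch of an AC-GAN style discriminator with a source head and a class head
# (trunk architecture, image size and class count are illustrative assumptions).
import torch
import torch.nn as nn


class ACDiscriminator(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
        )
        feat_dim = 128 * 8 * 8          # for 32x32 inputs after two stride-2 convs
        self.source_head = nn.Linear(feat_dim, 1)            # real vs. fake logit
        self.class_head = nn.Linear(feat_dim, num_classes)   # class logits

    def forward(self, images):
        h = self.trunk(images)
        return self.source_head(h), self.class_head(h)


# Example: score a batch of four 32x32 RGB images.
src_logit, cls_logits = ACDiscriminator()(torch.randn(4, 3, 32, 32))
```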
Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: TAC-GAN
Building on the AC-GAN, TAC-GAN BIBREF59 is proposed to replace the class information with textual descriptions as the input to perform the task of text to image synthesis. The architecture of TAC-GAN is shown in Fig. FIGREF15(d), which is similar to AC-GAN. Overall, the major difference between TAC-GAN and AC-GAN is that TAC-GAN conditions the generated images on text descriptions instead of on a class label. This design makes TAC-GAN more generic for image synthesis.
TAC-GAN imposes restrictions on generated images in terms of both texts and class labels. The input vector of TAC-GAN's generative network is built based on a noise vector and an embedded vector representation of the textual descriptions. The discriminator of TAC-GAN is similar to that of AC-GAN, in that it not only predicts whether the image is fake or not, but also predicts the label of the image. A minor difference of TAC-GAN's discriminator, compared to that of AC-GAN, is that it also receives text information as input before performing its classification.
The experiments and validations, on the Oxford-102 flowers dataset, show that the results produced by TAC-GAN are “slightly better” than those of other approaches, including GAN-INT-CLS and StackGAN.
Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: Text-SeGAN
In order to improve the diversity of the output images, both AC-GAN and TAC-GAN's discriminators predict class labels of the synthesised images. This process likely enforces the semantic diversity of the images, but class labels are inherently restrictive in describing image semantics, and images described by text can be matched to multiple labels. Therefore, instead of predicting images' class labels, an alternative solution is to directly quantify their semantic relevance.
The architecture of Text-SeGAN is shown in Fig. FIGREF15(e). In order to directly quantify semantic relevance, Text-SeGAN BIBREF28 adds a regression layer to estimate the semantic relevance between the image and text instead of a classifier layer predicting labels. The estimated semantic relevance is a fractional value ranging between 0 and 1, with a higher value reflecting better semantic relevance between the image and text. Due to this unique design, an inherent advantage of Text-SeGAN is that the generated images are not limited to certain classes and are semantically matched to the text input.
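The contrast with a classification head can be illustrated with the short sketch below, in which the final layer regresses a semantic relevance score in [0, 1] from fused image and text features; the feature dimensions and the concatenation-based fusion are assumptions made for illustration rather than the exact Text-SeGAN design.

```python
# Sketch of a semantic-relevance regression head (dimensions and the
# concatenation-based fusion are illustrative assumptions).
import torch
import torch.nn as nn


class RelevanceHead(nn.Module):
    def __init__(self, img_dim=512, txt_dim=256):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(img_dim + txt_dim, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),   # relevance in [0, 1]
        )

    def forward(self, img_feat, txt_feat):
        return self.score(torch.cat([img_feat, txt_feat], dim=1))


# Trained with a regression loss (e.g. MSE) against relevance targets in [0, 1].
head = RelevanceHead()
score = head(torch.randn(4, 512), torch.randn(4, 256))
loss = nn.functional.mse_loss(score, torch.rand(4, 1))
```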
Experiments and validations, on the Oxford-102 flower dataset, show that Text-SeGAN can generate diverse images that are semantically relevant to the input text. In addition, the results of Text-SeGAN show an improved inception score compared to other approaches, including GAN-INT-CLS, StackGAN, TAC-GAN, and HDGAN.
Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: MirrorGAN and Scene Graph GAN
Due to the inherent complexity of the visual images, and the diversity of text descriptions (i.e. same words could imply different meanings), it is difficulty to precisely match the texts to the visual images at the semantic levels. For most methods we have discussed so far, they employ a direct text to image generation process, but there is no validation about how generated images comply with the text in a reverse fashion.
black To ensure the semantic consistency and diversity, MirrorGAN BIBREF60 employs a mirror structure, which reversely learns from generated images to output texts (an image-to-text process) to further validate whether generated are indeed consistent to the input texts. MirrowGAN includes three modules: a semantic text embedding module (STEM), a global-local collaborative attentive module for cascaded image generation (GLAM), and a semantic text regeneration and alignment module (STREAM). The back to back Text-to-Image (T2I) and Image-to-Text (I2T) are combined to progressively enhance the diversity and semantic consistency of the generated images.
black In order to enhance the diversity of the output image, Scene Graph GAN BIBREF61 proposes to use visual scene graphs to describe the layout of the objects, allowing users to precisely specific the relationships between objects in the images. In order to convert the visual scene graph as input for GAN to generate images, this method uses graph convolution to process input graphs. It computes a scene layout by predicting bounding boxes and segmentation masks for objects. After that, it converts the computed layout to an image with a cascaded refinement network.
black
Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs
Instead of focusing on generating static images, another line of text-to-image synthesis research focuses on generating videos (i.e. sequences of images) from texts. In this context, the synthesised videos are often useful resources for automated assistance or story telling.
Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs ::: ObamaNet and T2S
One early and interesting work on motion enhancement GANs generates spoofed speech and lip-synced videos (or a talking face) of Barack Obama (i.e. ObamaNet) based on text input BIBREF62. This framework consists of three parts, i.e. text-to-speech using “Char2Wav”, a mouth-shape representation synced to the audio using a time-delayed LSTM, and “video generation” conditioned on the mouth shape using a “U-Net” architecture. Although the results seem promising, ObamaNet only models the mouth region and the videos are not generated from noise, so it can be regarded as video prediction rather than video generation.
Another meaningful attempt at using synthesised videos for automated assistance is to translate spoken language (e.g. text) into sign language video sequences (i.e. T2S) BIBREF63. This is often achieved through a two-step process: converting texts into meaningful units to generate images, followed by a learning component that arranges the images into sequential order for the best representation. More specifically, using RNN-based machine translation methods, texts are translated into sign language gloss sequences. Then, glosses are mapped to skeletal pose sequences using a lookup table. To generate videos, a conditional DCGAN is built whose input is the concatenation of the latent representation of an image for a base pose and the skeletal pose information.
Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs ::: T2V
In BIBREF64, a text-to-video model (T2V) is proposed based on the cGAN, in which the generator input is isotropic Gaussian noise combined with a text-gist vector. A key component of generating videos from text is to train a conditional generative model to extract both static and dynamic information from the text, followed by a hybrid framework combining a Variational Autoencoder (VAE) and a Generative Adversarial Network (GAN).
More specifically, T2V relies on two types of features, static features and dynamic features, to generate videos. Static features, called the “gist”, are used to sketch the text-conditioned background color and object layout structure. Dynamic features, on the other hand, are obtained by transforming the input text into an image filter, which eventually forms the video generator consisting of three entangled neural networks. The text-gist vector is produced by a gist generator, which maintains static information (e.g. background), and a text2filter, which captures the dynamic information (i.e. actions) in the text to generate videos.
As demonstrated in the paper BIBREF64, the generated videos are semantically related to the texts, but have a rather low quality (e.g. only $64 \times 64$ resolution).
Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs ::: StoryGAN
Different from T2V, which generates videos from a single text, StoryGAN aims to produce dynamic scenes consistent with the specified texts (i.e. a story written in a multi-sentence paragraph) using a sequential GAN model BIBREF65. A story encoder, a context encoder, and discriminators are the main components of this model. By using stochastic sampling, the story encoder learns a low-dimensional embedding vector for the whole story to keep the continuity of the story. The context encoder is proposed to capture contextual information during sequential image generation based on a deep RNN. The two discriminators of StoryGAN are an image discriminator, which evaluates the generated images, and a story discriminator, which ensures global consistency.
The experiments and comparisons, on the CLEVR dataset and the Pororo cartoon dataset, which were originally used for visual question answering, show that StoryGAN improves the generated video quality in terms of the Structural Similarity Index (SSIM), visual quality, consistency, and relevance (the last three measures are based on human evaluation).
GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Text-to-image Synthesis Applications
Computer vision applications have strong potential for industries including but not limited to the medical, government, military, entertainment, and online social media fields BIBREF7, BIBREF66, BIBREF67, BIBREF68, BIBREF69, BIBREF70. Text-to-image synthesis is one such application in computer vision AI that has become the main focus in recent years due to its potential for providing beneficial properties and opportunities for a wide range of applicable areas.
Text-to-image synthesis is an application byproduct of deep convolutional decoder networks in combination with GANs BIBREF7, BIBREF8, BIBREF10. Deep convolutional networks have contributed to several breakthroughs in image, video, speech, and audio processing. This learning method intends, among other possibilities, to help translate sequential text descriptions to images supplemented by one or many additional methods. Algorithms and methods developed in the computer vision field have allowed researchers in recent years to create realistic images from plain sentences. Advances in the computer vision, deep convolutional nets, and semantic units have shined light and redirected focus to this research area of text-to-image synthesis, having as its prime directive: to aid in the generation of compelling images with as much fidelity to text descriptions as possible.
To date, models for generating synthetic images from textual natural language in research laboratories at universities and private companies have yielded compelling images of flowers and birds BIBREF8. Though flowers and birds are the most common objects studied thus far, research has been applied to other classes as well. For example, there have been studies focused solely on human faces BIBREF7, BIBREF8, BIBREF71, BIBREF72.
It’s a fascinating time for computer vision AI and deep learning researchers and enthusiasts. The consistent advancement in hardware, software, and contemporaneous development of computer vision AI research disrupts multiple industries. These advances in technology allow for the extraction of several data types from a variety of sources. For example, image data captured from a variety of photo-ready devices, such as smart-phones, and online social media services opened the door to the analysis of large amounts of media datasets BIBREF70. The availability of large media datasets allow new frameworks and algorithms to be proposed and tested on real-world data.
GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Text-to-image Synthesis Benchmark Datasets
A summary of some reviewed methods and benchmark datasets used for validation is reported in Table TABREF43. In addition, the performance of different GANs with respect to the benchmark datasets and performance metrics is reported in Table TABREF48.
In order to synthesize images from text descriptions, many frameworks have taken a minimalistic approach by creating small and background-less images BIBREF73. In most cases, the experiments were conducted on simple datasets, initially containing images of birds and flowers. BIBREF8 contributed to these data sets by adding corresponding natural language text descriptions to subsets of the CUB, MSCOCO, and Oxford-102 datasets, which facilitated the work on text-to-image synthesis for several papers released more recently.
While most deep learning algorithms use the MNIST BIBREF74 dataset as the benchmark, there are four main datasets that are commonly used for evaluation of proposed GAN models for text-to-image synthesis: CUB BIBREF75, Oxford BIBREF76, COCO BIBREF77, and CIFAR-10 BIBREF78. CUB BIBREF75 contains 200 bird categories with matching text descriptions, and Oxford BIBREF76 contains 102 categories of flowers with 40-258 images each and matching text descriptions. These datasets contain individual objects, with the text description corresponding to that object, making them relatively simple. COCO BIBREF77 is much more complex, containing 328k images with 91 different object types. The CIFAR-10 BIBREF78 dataset consists of 60000 32$\times $32 colour images in 10 classes, with 6000 images per class. In contrast to CUB and Oxford, whose images each contain an individual object, COCO’s images may contain multiple objects, each with a label, so there are many labels per image. The total number of labels over the 328k images is 2.5 million BIBREF77.
GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Text-to-image Synthesis Benchmark Evaluation Metrics
Several evaluation metrics are used for judging the images produced by text-to-image GANs. Proposed by BIBREF25, the Inception Score (IS) calculates the entropy (randomness) of the conditional distribution, obtained by applying the Inception Model introduced in BIBREF79, and of the marginal distribution of a large set of generated images, which should be low and high, respectively, for meaningful images. Low entropy of the conditional distribution means that the evaluator is confident that the images came from the data distribution, and high entropy of the marginal distribution means that the set of generated images is diverse, which are both desired features. The IS is then computed from the KL-divergence between the two distributions. FCN-scores BIBREF2 are computed in a similar manner, relying on the intuition that realistic images generated by a GAN should be classified correctly by a classifier trained on real images of the same distribution. Therefore, if the FCN classifier classifies a set of synthetic images accurately, the images are probably realistic, and the corresponding GAN gets a high FCN score. The Frechet Inception Distance (FID) BIBREF80 is the other commonly used evaluation metric, and takes a different approach, directly comparing the generated images to real images in the distribution. A high FID means there is little relationship between the statistics of the synthetic and real images and vice versa, so lower FIDs are better.
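In compact form (using standard notation rather than the notation of the cited papers), the Inception Score can be written as $IS(G)=\exp \big (\mathbb {E}_{x\sim p_{g}}[D_{KL}(p(y|x)\,\Vert \,p(y))]\big )$, where $p(y|x)$ and $p(y)$ are the conditional and marginal label distributions produced by the Inception model, and the FID between Gaussians fitted to Inception features of real ($r$) and generated ($g$) images, with means $\mu $ and covariances $\Sigma $, is $FID=\Vert \mu _r-\mu _g\Vert _2^2+\mathrm {Tr}\big (\Sigma _r+\Sigma _g-2(\Sigma _r\Sigma _g)^{1/2}\big )$.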
The performance of different GANs with respect to the benchmark datasets and performance metrics is reported in Table TABREF48. In addition, Figure FIGREF49 further lists the performance of 14 GANs with respect to their Inception Scores (IS).
GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: GAN Based Text-to-image Synthesis Results Comparison
While we gathered all the data we could find on scores for each model on the CUB, Oxford, and COCO datasets using IS, FID, FCN, and human classifiers, we unfortunately were unable to find certain data for AttnGAN and HDGAN (missing in Table TABREF48). The best evaluation we can give for those with missing data is our own opinions by looking at examples of generated images provided in their papers. In this regard, we observed that HDGAN produced relatively better visual results on the CUB and Oxford datasets while AttnGAN produced far more impressive results than the rest on the more complex COCO dataset. This is evidence that the attentional model and DAMSM introduced by AttnGAN are very effective in producing high-quality images. Examples of the best results of birds and plates of vegetables generated by each model are presented in Figures FIGREF50 and FIGREF51, respectively.
In terms of inception score (IS), which is the metric that was applied to the majority of models except DC-GAN, the results in Table TABREF48 show that StackGAN++ only showed slight improvement over its predecessor, StackGAN, for text-to-image synthesis. However, StackGAN++ did introduce a very worthy enhancement for unconditional image generation by organizing the generators and discriminators in a “tree-like” structure. This indicates that revising the structures of the discriminators and/or generators can bring a moderate level of improvement in text-to-image synthesis.
In addition, the results in Table TABREF48 also show that DM-GAN BIBREF53 has the best performance, followed by Obj-GAN BIBREF81. Notice that both DM-GAN and Obj-GAN are the most recently developed methods in the field (both published in 2019), indicating that research in text-to-image synthesis is continuously improving the results for better visual perception and interpretation. Technically, DM-GAN BIBREF53 is a model using dynamic memory to refine fuzzy image contents initially generated from the GAN networks. A memory writing gate is used by DM-GAN to select important text information and generate images based on the selected text accordingly. On the other hand, Obj-GAN BIBREF81 focuses on object-centered text-to-image synthesis. The proposed framework of Obj-GAN consists of a layout generation module, including a bounding box generator and a shape generator, and an object-driven attentive image generator. The designs and advancement of DM-GAN and Obj-GAN indicate that research in text-to-image synthesis is advancing to put more emphasis on image details and text semantics for better understanding and perception.
GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Notable Mentions
It is worth noting that although this survey mainly focuses on text-to-image synthesis, there have been other applications of GANs in broader image synthesis field that we found fascinating and worth dedicating a small section to. For example, BIBREF72 used Sem-Latent GANs to generate images of faces based on facial attributes, producing impressive results that, at a glance, could be mistaken for real faces. BIBREF82, BIBREF70, and BIBREF83 demonstrated great success in generating text descriptions from images (image captioning) with great accuracy, with BIBREF82 using an attention-based model that automatically learns to focus on salient objects and BIBREF83 using deep visual-semantic alignments. Finally, there is a contribution made by StackGAN++ that was not mentioned in the dedicated section due to its relation to unconditional image generation as opposed to conditional, namely a color-regularization term BIBREF47. This additional term aims to keep the samples generated from the same input at different stages more consistent in color, which resulted in significantly better results for the unconditional model.
Conclusion
The recent advancement in text-to-image synthesis research opens the door to several compelling methods and architectures. The main objective of text-to-image synthesis initially was to create images from simple labels, and this objective later scaled to natural languages. In this paper, we reviewed novel methods that generate, in our opinion, the most visually-rich and photo-realistic images, from text-based natural language. These generated images often rely on generative adversarial networks (GANs), deep convolutional decoder networks, and multimodal learning methods.
In this paper, we first proposed a taxonomy to organize GAN based text-to-image synthesis frameworks into four major groups: semantic enhancement GANs, resolution enhancement GANs, diversity enhancement GANs, and motion enhancement GANs. The taxonomy provides a clear roadmap to show the motivations, architectures, and differences between methods, and also outlines their evolution timeline and relationships. Following the proposed taxonomy, we reviewed important features of each method and their architectures. We indicated the model definitions and key contributions of some advanced GAN frameworks, including StackGAN, StackGAN++, AttnGAN, DC-GAN, AC-GAN, TAC-GAN, HDGAN, Text-SeGAN, StoryGAN, etc. Many of the solutions surveyed in this paper tackled the highly complex challenge of generating photo-realistic images beyond swatch-size samples, in other words, beyond the work of BIBREF8 in which images were generated from text in 64$\times $64 tiny swatches. Lastly, all methods were evaluated on datasets that included birds, flowers, humans, and other miscellaneous elements. We were also able to locate some important papers that were as impressive as the papers we finally surveyed, though these notable papers have yet to contribute directly or indirectly to the expansion of the vast computer vision AI field. Looking into the future, an excellent extension of the works surveyed in this paper would be to give more independence to the several learning methods involved in the studies (e.g. less human intervention) as well as increasing the size of the output images.
Conflict of Interest
The authors declare that there is no conflict of interest regarding the publication of this article. | unsupervised |
e96adf8466e67bd19f345578d5a6dc68fd0279a1 | e96adf8466e67bd19f345578d5a6dc68fd0279a1_1 | Q: Is text-to-image synthesis trained is suppervized or unsuppervized manner?
Text: Introduction
“ (GANs), and the variations that are now being proposed is the most interesting idea in the last 10 years in ML, in my opinion.” (2016)
– Yann LeCun
A picture is worth a thousand words! While written text provides efficient, effective, and concise ways for communication, visual content, such as images, is a more comprehensive, accurate, and intelligible method of information sharing and understanding. Generation of images from text descriptions, i.e. text-to-image synthesis, is a complex computer vision and machine learning problem that has seen great progress over recent years. Automatic image generation from natural language may allow users to describe visual elements through visually-rich text descriptions. The ability to do so effectively is highly desirable as it could be used in artificial intelligence applications such as computer-aided design, image editing BIBREF0, BIBREF1, game engines for the development of the next generation of video games BIBREF2, and pictorial art generation BIBREF3.
Introduction ::: blackTraditional Learning Based Text-to-image Synthesis
In the early stages of research, text-to-image synthesis was mainly carried out through a combined search and supervised learning process BIBREF4, as shown in Figure FIGREF4. In order to connect text descriptions to images, one could use the correlation between keywords (or keyphrases) and images that identifies informative and “picturable” text units; then, these units would search for the most likely image parts conditioned on the text, eventually optimizing the picture layout conditioned on both the text and the image parts. Such methods often integrated multiple artificial intelligence key components, including natural language processing, computer vision, computer graphics, and machine learning.
The major limitation of the traditional learning based text-to-image synthesis approaches is that they lack the ability to generate new image content; they can only change the characteristics of the given/training images. Alternatively, research in generative models has advanced significantly and delivers solutions to learn from training images and produce new visual content. For example, Attribute2Image BIBREF5 models each image as a composite of foreground and background. In addition, a layered generative model with disentangled latent variables is learned, using a variational auto-encoder, to generate visual content. Because the learning is customized/conditioned by given attributes, the generative models of Attribute2Image can generate images with respect to different attributes, such as gender, hair color, age, etc., as shown in Figure FIGREF5.
Introduction ::: GAN Based Text-to-image Synthesis
Although generative model based text-to-image synthesis provides much more realistic image synthesis results, the image generation is still conditioned by the limited attributes. In recent years, several papers have been published on the subject of text-to-image synthesis. Most of the contributions from these papers rely on multimodal learning approaches that include generative adversarial networks and deep convolutional decoder networks as their main drivers to generate entrancing images from text BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11.
First introduced by Ian Goodfellow et al. BIBREF9, generative adversarial networks (GANs) consist of two paired neural networks: a discriminator and a generator. These two models compete with one another, with the generator attempting to produce synthetic/fake samples that will fool the discriminator and the discriminator attempting to differentiate between real (genuine) and synthetic samples. Because GANs' adversarial training aims to cause generators to produce images similar to the real (training) images, GANs can naturally be used to generate synthetic images (image synthesis), and this process can even be customized further by using text descriptions to specify the types of images to generate, as shown in Figure FIGREF6.
Much like text-to-speech and speech-to-text conversion, there exists a wide variety of problems that text-to-image synthesis could solve in the computer vision field specifically BIBREF8, BIBREF12. Nowadays, researchers are attempting to solve a plethora of computer vision problems with the aid of deep convolutional networks, generative adversarial networks, and a combination of multiple methods, often called multimodal learning methods BIBREF8. For simplicity, multiple learning methods will be referred to as multimodal learning hereafter BIBREF13. Researchers often describe multimodal learning as a method that incorporates characteristics from several methods, algorithms, and ideas. This can include ideas from two or more learning approaches in order to create a robust implementation to solve an uncommon problem or improve a solution BIBREF8, BIBREF14, BIBREF15, BIBREF16, BIBREF17.
In this survey, we focus primarily on reviewing recent works that aim to solve the challenge of text-to-image synthesis using generative adversarial networks (GANs). In order to provide a clear roadmap, we propose a taxonomy to summarize reviewed GANs into four major categories. Our review will elaborate on the motivations of methods in each category, analyze typical models, their network architectures, and possible drawbacks for further improvement. The visual abstract of the survey and the list of reviewed GAN frameworks is shown in Figure FIGREF8.
The remainder of the survey is organized as follows. Section 2 presents a brief summary of existing works on subjects similar to that of this paper and highlights the key distinctions making ours unique. Section 3 gives a short introduction to GANs and some preliminary concepts related to image generation, as they are the engines that make text-to-image synthesis possible and are essential building blocks to achieve photo-realistic images from text descriptions. Section 4 proposes a taxonomy to summarize GAN based text-to-image synthesis and discusses the models and architectures of novel works focused solely on text-to-image synthesis. This section will also draw key contributions from these works in relation to their applications. Section 5 reviews GAN based text-to-image synthesis benchmarks, performance metrics, and comparisons, including a simple review of GANs for other applications. In section 6, we conclude with a brief summary and outline ideas for future interesting developments in the field of text-to-image synthesis.
Related Work
With the growth and success of GANs, deep convolutional decoder networks, and multimodal learning methods, these techniques were some of the first procedures which aimed to solve the challenge of image synthesis. Many engineers and scientists in computer vision and AI have contributed through extensive studies and experiments, with numerous proposals and publications detailing their contributions. Because GANs, introduced by BIBREF9, are emerging research topics, their practical applications to image synthesis are still in their infancy. Recently, many new GAN architectures and designs have been proposed to use GANs for different applications, e.g. using GANs to generate sentimental texts BIBREF18, or using GANs to transform natural images into cartoons BIBREF19.
Although GANs are becoming increasingly popular, very few survey papers currently exist to summarize and outline contemporaneous technical innovations and contributions of different GAN architectures BIBREF20, BIBREF21. Survey papers specifically attuned to analyzing different contributions to text-to-image synthesis using GANs are even more scarce. We have thus found two surveys BIBREF6, BIBREF7 on image synthesis using GANs, which are the two most closely related publications to our survey objective. In the following paragraphs, we briefly summarize each of these surveys and point out how our objectives differ from theirs.
In BIBREF6, the authors provide an overview of image synthesis using GANs. In this survey, the authors discuss the motivations for research on image synthesis and introduce some background information on the history of GANs, including a section dedicated to core concepts of GANs, namely generators, discriminators, and the min-max game analogy, and some enhancements to the original GAN model, such as conditional GANs, addition of variational auto-encoders, etc. In this survey, we will carry out a similar review of the background knowledge because the understanding of these preliminary concepts is paramount for the rest of the paper. Three types of approaches for image generation are reviewed, including direct methods (single generator and discriminator), hierarchical methods (two or more generator-discriminator pairs, each with a different goal), and iterative methods (each generator-discriminator pair generates a gradually higher-resolution image). Following the introduction, BIBREF6 discusses methods for text-to-image and image-to-image synthesis, respectively, and also describes several evaluation metrics for synthetic images, including inception scores and Frechet Inception Distance (FID), and explains the significance of the discriminators acting as learned loss functions as opposed to fixed loss functions.
Different from the above survey, which has a relatively broad scope in GANs, our objective is heavily focused on text-to-image synthesis. Although this topic, text-to-image synthesis, has indeed been covered in BIBREF6, they did so in a much less detailed fashion, mostly listing the many different works in a time-sequential order. In comparison, we will review several representative methods in the field and outline their models and contributions in detail.
Similarly to BIBREF6, the second survey paper BIBREF7 begins with a standard introduction addressing the motivation of image synthesis and the challenges it presents followed by a section dedicated to core concepts of GANs and enhancements to the original GAN model. In addition, the paper covers the review of two types of applications: (1) unconstrained applications of image synthesis such as super-resolution, image inpainting, etc., and (2) constrained image synthesis applications, namely image-to-image, text-to-image, and sketch-to image, and also discusses image and video editing using GANs. Again, the scope of this paper is intrinsically comprehensive, while we focus specifically on text-to-image and go into more detail regarding the contributions of novel state-of-the-art models.
Other surveys have been published on related matters, mainly related to the advancements and applications of GANs BIBREF22, BIBREF23, but we have not found any prior works which focus specifically on text-to-image synthesis using GANs. To our knowledge, this is the first paper to do so.
Preliminaries and Frameworks
In this section, we first introduce preliminary knowledge of GANs and one of their commonly used variants, the conditional GAN (i.e. cGAN), which is the building block for many GAN based text-to-image synthesis models. After that, we briefly separate GAN based text-to-image synthesis into two types, simple GAN frameworks vs. advanced GAN frameworks, and discuss why advanced GAN architectures are needed for image synthesis.
Notice that the simple vs. advanced GAN framework separation is rather coarse; the next section will propose a taxonomy that summarizes advanced GAN frameworks into four categories, based on their objectives and designs.
Preliminaries and Frameworks ::: Generative Adversarial Neural Network
Before moving on to a discussion and analysis of works applying GANs for text-to-image synthesis, there are some preliminary concepts, enhancements of GANs, datasets, and evaluation metrics that are present in some of the works described in the next section and are thus worth introducing.
As stated previously, GANs were introduced by Ian Goodfellow et al. BIBREF9 in 2014, and consist of two deep neural networks, a generator and a discriminator, which are trained independently with conflicting goals: The generator aims to generate samples closely related to the original data distribution and fool the discriminator, while the discriminator aims to distinguish between samples from the generator model and samples from the true data distribution by calculating the probability of the sample coming from either source. A conceptual view of the generative adversarial network (GAN) architecture is shown in Figure FIGREF11.
The training of GANs is an iterative process that, with each iteration, updates the generator and the discriminator with the goal of each defeating the other, leading each model to become increasingly adept at its specific task until a threshold is reached. This is analogous to a min-max game between the two models, according to the following equation:
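Based on the definitions given below, the objective referred to as Eq. (DISPLAY_FORM10) takes the standard GAN min-max form:

$\min _{\theta _g} \max _{\theta _d} \; \mathbb {E}_{x\sim p_{data}(x)}[\log D_{\theta _d}(x)] + \mathbb {E}_{z\sim p_{z}(z)}[\log (1-D_{\theta _d}(G_{\theta _g}(z)))]$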
In Eq. (DISPLAY_FORM10), $x$ denotes a multi-dimensional sample, e.g., an image, and $z$ denotes a multi-dimensional latent space vector, e.g., a multidimensional data point following a predefined distribution function such as that of normal distributions. $D_{\theta _d}()$ denotes a discriminator function, controlled by parameters $\theta _d$, which aims to classify a sample into a binary space. $G_{\theta _g}()$ denotes a generator function, controlled by parameters $\theta _g$, which aims to generate a sample from some latent space vector. For example, $G_{\theta _g}(z)$ means using a latent vector $z$ to generate a synthetic/fake image, and $D_{\theta _d}(x)$ means to classify an image $x$ as binary output (i.e. true/false or 1/0). In the GAN setting, the discriminator $D_{\theta _d}()$ is learned to distinguish a genuine/true image (labeled as 1) from fake images (labeled as 0). Therefore, given a true image $x$, the ideal output from the discriminator $D_{\theta _d}(x)$ would be 1. Given a fake image generated from the generator $G_{\theta _g}(z)$, the ideal prediction from the discriminator $D_{\theta _d}(G_{\theta _g}(z))$ would be 0, indicating the sample is a fake image.
Following the above definition, the $\min \max $ objective function in Eq. (DISPLAY_FORM10) aims to learn parameters for the discriminator ($\theta _d$) and generator ($\theta _g$) to reach an optimization goal: The discriminator intends to differentiate true vs. fake images with maximum capability $\max _{\theta _d}$ whereas the generator intends to minimize the difference between a fake image vs. a true image $\min _{\theta _g}$. In other words, the discriminator sets the characteristics and the generator produces elements, often images, iteratively until it meets the attributes set forth by the discriminator. GANs are often used with images and other visual elements and are notoriously efficient in generating compelling and convincing photorealistic images. Most recently, GANs were used to generate an original painting in an unsupervised fashion BIBREF24. The following sections go into further detail regarding how the generator and discriminator are trained in GANs.
Generator - In image synthesis, the generator network can be thought of as a mapping from one representation space (latent space) to another (actual data) BIBREF21. When it comes to image synthesis, all of the images in the data space fall into some distribution in a very complex and high-dimensional feature space. Sampling from such a complex space is very difficult, so GANs instead train a generator to create synthetic images from a much more simple feature space (usually random noise) called the latent space. The generator network performs up-sampling of the latent space and is usually a deep neural network consisting of several convolutional and/or fully connected layers BIBREF21. The generator is trained using gradient descent to update the weights of the generator network with the aim of producing data (in our case, images) that the discriminator classifies as real.
Discriminator - The discriminator network can be thought of as a mapping from image data to the probability of the image coming from the real data space, and is also generally a deep neural network consisting of several convolution and/or fully connected layers. However, the discriminator performs down-sampling as opposed to up-sampling. Like the generator, it is trained using gradient descent but its goal is to update the weights so that it is more likely to correctly classify images as real or fake.
In GANs, the ideal outcome is for both the generator's and discriminator's cost functions to converge so that the generator produces photo-realistic images that are indistinguishable from real data, and the discriminator at the same time becomes an expert at differentiating between real and synthetic data. This, however, is not possible since a reduction in cost of one model generally leads to an increase in cost of the other. This phenomenon makes training GANs very difficult, and training them simultaneously (both models performing gradient descent in parallel) often leads to a stable orbit where neither model is able to converge. To combat this, the generator and discriminator are often trained independently. In this case, the GAN remains the same, but there are different training stages. In one stage, the weights of the generator are kept constant and gradient descent updates the weights of the discriminator, and in the other stage the weights of the discriminator are kept constant while gradient descent updates the weights of the generator. This is repeated for some number of epochs until a desired low cost for each model is reached BIBREF25.
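A bare-bones sketch of this alternating training scheme is given below. The toy generator and discriminator, the optimizers, and the random stand-in data are placeholders, and practical details such as performing multiple discriminator updates per generator update are omitted.

```python
# Bare-bones alternating GAN training loop (models, optimizers and data are placeholders).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 784), nn.Tanh())   # toy generator: noise -> flat image
D = nn.Sequential(nn.Linear(784, 1))               # toy discriminator: flat image -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):                             # stand-in for epochs over a real dataset
    real = torch.randn(32, 784)                     # placeholder for a batch of real data
    noise = torch.randn(32, 64)

    # Stage 1: update the discriminator while the generator is held fixed.
    fake = G(noise).detach()                        # detach so no gradients reach G
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Stage 2: update the generator while the discriminator is held fixed.
    g_loss = bce(D(G(noise)), torch.ones(32, 1))    # G tries to make D output "real"
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```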
Preliminaries and Frameworks ::: cGAN: Conditional GAN
Conditional Generative Adversarial Networks (cGAN) are an enhancement of GANs proposed by BIBREF26 shortly after the introduction of GANs by BIBREF9. The objective function of the cGAN is defined in Eq. (DISPLAY_FORM13) which is very similar to the GAN objective function in Eq. (DISPLAY_FORM10) except that the inputs to both discriminator and generator are conditioned by a class label $y$.
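With the class label $y$ supplied to both networks, the objective referred to as Eq. (DISPLAY_FORM13) can be written in the standard conditional form:

$\min _{\theta _g} \max _{\theta _d} \; \mathbb {E}_{x\sim p_{data}(x)}[\log D_{\theta _d}(x|y)] + \mathbb {E}_{z\sim p_{z}(z)}[\log (1-D_{\theta _d}(G_{\theta _g}(z|y)))]$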
The main technical innovation of cGAN is that it introduces an additional input or inputs to the original GAN model, allowing the model to be trained on information such as class labels or other conditioning variables as well as the samples themselves, concurrently. Whereas the original GAN was trained only with samples from the data distribution, resulting in the generated sample reflecting the general data distribution, cGAN enables directing the model to generate more tailored outputs.
In Figure FIGREF14, the condition vector is the class label (text string) "Red bird", which is fed to both the generator and discriminator. It is important, however, that the condition vector is related to the real data. If the model in Figure FIGREF14 was trained with the same set of real data (red birds) but the condition text was "Yellow fish", the generator would learn to create images of red birds when conditioned with the text "Yellow fish".
Note that the condition vector in cGAN can come in many forms, such as texts, not just limited to the class label. Such a unique design provides a direct solution to generate images conditioned by predefined specifications. As a result, cGAN has been used in text-to-image synthesis since the very first day of its invention although modern approaches can deliver much better text-to-image synthesis results.
Preliminaries and Frameworks ::: Simple GAN Frameworks for Text-to-Image Synthesis
In order to generate images from text, one simple solution is to employ the conditional GAN (cGAN) designs and add conditions to the training samples, such that the GAN is trained with respect to the underlying conditions. Several pioneer works have followed similar designs for text-to-image synthesis.
An essential disadvantage of using cGAN for text-to-image synthesis is that it cannot handle complicated textual descriptions for image generation, because cGAN uses labels as conditions to restrict the GAN inputs. If the text inputs have multiple keywords (or long text descriptions) they cannot be used simultaneously to restrict the input. Instead of using text as conditions, another two approaches BIBREF8, BIBREF16 use text as input features, and concatenate such features with other features to train the discriminator and generator, as shown in Figure FIGREF15(b) and (c). To ensure text can be used as GAN input, a feature embedding or feature representation learning BIBREF29, BIBREF30 function $\varphi ()$ is often introduced to convert the input text into numeric features, which are further concatenated with other features to train GANs.
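This feature-concatenation idea can be sketched as follows, with a stand-in module playing the role of $\varphi ()$; the encoder, layer sizes, and output dimensionality are illustrative placeholders rather than any specific published configuration.

```python
# Sketch of concatenating an embedded text description with noise as generator input
# (the text encoder phi and all layer sizes are illustrative placeholders).
import torch
import torch.nn as nn

phi = nn.Sequential(nn.Linear(300, 128), nn.ReLU())   # stand-in for a text embedding function
generator = nn.Sequential(
    nn.Linear(100 + 128, 256), nn.ReLU(),
    nn.Linear(256, 784), nn.Tanh(),
)

text_vec = torch.randn(8, 300)                         # e.g. averaged word vectors for 8 captions
noise = torch.randn(8, 100)
text_feat = phi(text_vec)                              # phi() converts text into numeric features
fake_images = generator(torch.cat([noise, text_feat], dim=1))   # flat 28x28 images, for example
```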
Preliminaries and Frameworks ::: Advanced GAN Frameworks for Text-to-Image Synthesis
Motivated by the GAN and conditional GAN (cGAN) design, many GAN based frameworks have been proposed to generate images, with different designs and architectures, such as using multiple discriminators, using progressively trained discriminators, or using hierarchical discriminators. Figure FIGREF17 outlines several advanced GAN frameworks in the literature. In addition to these frameworks, many news designs are being proposed to advance the field with rather sophisticated designs. For example, a recent work BIBREF37 proposes to use a pyramid generator and three independent discriminators, blackeach focusing on a different aspect of the images, to lead the generator towards creating images that are photo-realistic on multiple levels. Another recent publication BIBREF38 proposes to use discriminator to measure semantic relevance between image and text instead of class prediction (like most discriminator in GANs does), resulting a new GAN structure outperforming text conditioned auxiliary classifier (TAC-GAN) BIBREF16 and generating diverse, realistic, and relevant to the input text regardless of class.
In the following section, we will first propose a taxonomy that summarizes advanced GAN frameworks for text-to-image synthesis, and review the most recently proposed solutions to the challenge of generating photo-realistic images conditioned on natural language text descriptions using GANs. The solutions we discuss are selected based on relevance and quality of contributions. Many publications exist on the subject of image generation using GANs, but in this paper we focus specifically on models for text-to-image synthesis, with the review emphasizing the “model” and “contributions” for text-to-image synthesis. At the end of this section, we also briefly review methods using GANs for other image-synthesis applications.
Text-to-Image Synthesis Taxonomy and Categorization
In this section, we propose a taxonomy to summarize advanced GAN based text-to-image synthesis frameworks, as shown in Figure FIGREF24. The taxonomy organizes GAN frameworks into four categories, including Semantic Enhancement GANs, Resolution Enhancement GANs, Diversity Enhancement GANs, and Motion Enhancement GANs. Following the proposed taxonomy, each subsection will introduce several typical frameworks and address their techniques of using GANs to solve certain aspects of the text-to-image synthesis challenges.
Text-to-Image Synthesis Taxonomy and Categorization ::: GAN based Text-to-Image Synthesis Taxonomy
Although the ultimate goal of text-to-image synthesis is to generate images closely related to the textual descriptions, the relevance of the images to the texts is often validated from different perspectives, due to the inherent diversity of human perceptions. For example, when generating images matching the description “rose flowers”, some users may know the exact type of flowers they like and intend to generate rose flowers with similar colors. Other users may seek to generate high quality rose flowers with a nice background (e.g. a garden). The third group of users may be more interested in generating flowers similar to roses but with different colors and visual appearance, e.g. roses, begonia, and peony. The fourth group of users may want to not only generate flower images, but also use them to form a meaningful action, e.g. a video clip showing flower growth, performing a magic show using those flowers, or telling a love story using the flowers.
From the text-to-image synthesis point of view, the first group of users intend to precisely control the semantics of the generated images, and their goal is to match the texts and images at the semantic level. The second group of users are more focused on the resolution and the quality of the images, in addition to the requirement that the images and texts are semantically related. For the third group of users, the goal is to diversify the output images, such that their images carry diversified visual appearances and are also semantically related. The fourth user group adds a new dimension to image synthesis, and aims to generate sequences of images which are coherent in temporal order, i.e. capture the motion information.
Based on the above descriptions, we categorize GAN based text-to-image synthesis into a taxonomy with four major categories, as shown in Fig. FIGREF24.
Semantic Enhancement GANs: Semantic enhancement GANs represent pioneer works of GAN frameworks for text-to-image synthesis. The main focus of these GAN frameworks is to ensure that the generated images are semantically related to the input texts. This objective is mainly achieved by using a neural network to encode texts as dense features, which are further fed to a second network to generate images matching the texts.
Resolution Enhancement GANs: Resolution enhancement GANs mainly focus on generating high quality images which are semantically matched to the texts. This is mainly achieved through a multi-stage GAN framework, where the outputs from earlier stage GANs are fed to the second (or later) stage GAN to generate better quality images.
Diversity Enhancement GANs: Diversity enhancement GANs intend to diversify the output images, such that the generated images are not only semantically related but also have different types and visual appearance. This objective is mainly achieved through an additional component to estimate semantic relevance between generated images and texts, in order to maximize the output diversity.
Motion Enhancement GANs: Motion enhancement GANs intend to add a temporal dimension to the output images, such that they can form meaningful actions with respect to the text descriptions. This goal is mainly achieved through a two-step process which first generates images matching the “actions” of the texts, followed by a mapping or alignment procedure to ensure that the images are coherent in temporal order.
In the following, we will introduce how these GAN frameworks evolve for text-to-image synthesis, and will also review some typical methods of each category.
Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs
Semantic relevance is one of the most important criteria of text-to-image synthesis. Most GANs discussed in this survey are required to generate images semantically related to the text descriptions. However, semantic relevance is a rather subjective measure, and images are inherently rich in terms of their semantics and interpretations. Therefore, many GANs are further proposed to enhance text-to-image synthesis from different perspectives. In this subsection, we will review several classical approaches which commonly serve as text-to-image synthesis baselines.
Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs ::: DC-GAN
Deep convolutional generative adversarial network (DC-GAN) BIBREF8 represents the pioneering work for text-to-image synthesis using GANs. Its main goal is to train a deep convolutional generative adversarial network on text features, where the text features are encoded by another neural network, a hybrid character-level convolutional recurrent network, and both networks perform feed-forward inference conditioned on the text features. Generating realistic images automatically from natural language text is the motivation of several of the works proposed in this computer vision field. However, current artificial intelligence (AI) systems are still far from achieving this task BIBREF8, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF22, BIBREF26. Lately, recurrent neural networks have led the way in developing frameworks that learn discriminatively on text features. At the same time, generative adversarial networks (GANs) have begun to show promise in generating compelling images of a whole host of elements including, but not limited to, faces, birds, flowers, and less common images such as room interiors BIBREF8. DC-GAN is a multimodal learning model that attempts to bridge these two unsupervised machine learning approaches, recurrent neural networks (RNN) and generative adversarial networks (GANs), with the purpose of speeding up the generation of images from text.
Deep learning has shed light on some of the most sophisticated advances in natural language representation, image synthesis BIBREF7, BIBREF8, BIBREF43, BIBREF35, and classification of generic data BIBREF44. However, the bulk of the latest breakthroughs in deep learning and computer vision were related to supervised learning BIBREF8. Even though natural language and image synthesis were part of several contributions on the supervised side of deep learning, unsupervised learning has recently seen a tremendous rise in input from the research community, especially on two subproblems: text-based natural language and image synthesis BIBREF45, BIBREF14, BIBREF8, BIBREF46, BIBREF47. These subproblems are typically subdivided as focused research areas, and DC-GAN's contributions are mainly driven by these two research areas. In order to generate plausible images from natural language, DC-GAN's contributions revolve around developing a straightforward yet effective GAN architecture and training strategy that allows natural text-to-image synthesis. These contributions are primarily tested on the Caltech-UCSD Birds and Oxford-102 Flowers datasets. Each image in these datasets carries five text descriptions, which were created by the research team when setting up the evaluation environment. The DC-GAN model is subsequently trained on several subcategories, which in this research represent the training and testing sub-datasets. The performance shown by these experiments displays a promising yet effective way to generate images from textual natural language descriptions BIBREF8.
Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs ::: DC-GAN Extensions
Following the pioneering DC-GAN framework BIBREF8, many studies propose revised network structures (e.g. different discriminators) in order to generate images with better semantic relevance to the texts. Based on the deep convolutional adversarial network (DC-GAN) architecture, GAN-CLS with an image-text matching discriminator, GAN-INT learned with text manifold interpolation, and GAN-INT-CLS which combines both are proposed to find a semantic match between text and image. Similar to the DC-GAN architecture, an adaptive loss function (i.e. Perceptual Loss BIBREF48) is proposed for semantic image synthesis which can synthesize a realistic image that not only matches the target text description but also keeps the irrelevant features (e.g. background) from source images BIBREF49. Regarding the perceptual losses, three loss functions (i.e. pixel reconstruction loss, activation reconstruction loss and texture reconstruction loss) are proposed in BIBREF50, in which the network architectures are constructed based on DC-GAN, i.e. GAN-INT-CLS-Pixel, GAN-INT-CLS-VGG and GAN-INT-CLS-Gram with respect to the three losses. In BIBREF49, a residual transformation unit is added in the network to retain a similar structure to the source image.
Following BIBREF49 and considering that features in early layers of a CNN capture the background while the foreground is obtained in later layers, a pair of discriminators with different architectures (i.e. Paired-D GAN) is proposed to synthesize the background and foreground from a source image separately BIBREF51. Meanwhile, skip-connections in the generator are employed to more precisely retain background information from the source image.
Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs ::: MC-GAN
When synthesising images, most text-to-image synthesis methods consider each output image as one single unit to characterize its semantic relevance to the texts. This is likely problematic because most images naturally consist of two crucial components, foreground and background, and without properly separating these two components it is hard to characterize the semantics of an image.
In order to enhance the semantic relevance of the images, a multi-conditional GAN (MC-GAN) BIBREF52 is proposed to synthesize a target image by combining the background of a source image and a text-described foreground object which does not exist in the source image. A unique feature of MC-GAN is that it proposes a synthesis block in which the background feature is extracted from the given image without a non-linear function (i.e. only using convolution and batch normalization) and the foreground feature is the feature map from the previous layer.
Because MC-GAN is able to properly model the background and foreground of the generated images, a unique strength of MC-GAN is that users can provide a base image, and MC-GAN will preserve the background information of the base image when generating new images.
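The synthesis block described above can be roughly sketched as follows; note that the exact way MC-GAN fuses the two feature maps may differ from the simple element-wise sum assumed here, and all channel sizes are illustrative:

```python
import torch
import torch.nn as nn

class SynthesisBlock(nn.Module):
    """Extracts a background feature with only convolution + batch normalization
    (no non-linearity) and combines it with the foreground feature map from the previous layer."""
    def __init__(self, channels=64):
        super().__init__()
        self.bg_path = nn.Sequential(                 # linear path: conv + BN, no activation
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, base_image, fg_feature):
        bg_feature = self.bg_path(base_image)
        return fg_feature + bg_feature                # assumed fusion: element-wise sum

block = SynthesisBlock()
out = block(torch.randn(2, 3, 64, 64), torch.randn(2, 64, 64, 64))
```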
Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs
Because training GANs is much more difficult when generating high-resolution images, a two-stage GAN (i.e. StackGAN) is proposed in which rough images (i.e. low-resolution images) are generated in stage-I and refined in stage-II. To further improve the quality of generated images, the second version of StackGAN (i.e. StackGAN++) is proposed to use multi-stage GANs to generate multi-scale images. A color-consistency regularization term is also added into the loss to keep the consistency of images at different scales.
While StackGAN and StackGAN++ are both built on the global sentence vector, AttnGAN is proposed to use an attention mechanism (i.e. Deep Attentional Multimodal Similarity Model (DAMSM)) to model multi-level information (i.e. word level and sentence level) in GANs. In the following, StackGAN, StackGAN++ and AttnGAN will be explained in detail.
Recently, the Dynamic Memory Generative Adversarial Network (i.e. DM-GAN) BIBREF53, which uses a dynamic memory component, is proposed to focus on refining the initially generated image, which is the key to the success of generating high quality images.
Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: StackGAN
In 2017, Zhang et al. proposed a model for generating photo-realistic images from text descriptions called StackGAN (Stacked Generative Adversarial Network) BIBREF33. In their work, they define a two-stage model that uses two cascaded GANs, each corresponding to one of the stages. The stage I GAN takes a text description as input, converts the text description to a text embedding containing several conditioning variables, and generates a low-quality 64$\times $64 image with rough shapes and colors based on the computed conditioning variables. The stage II GAN then takes this low-quality stage I image as well as the same text embedding and uses the conditioning variables to correct and add more detail to the stage I result. The output of stage II is a photorealistic 256$\times $256 image that resembles the text description with compelling accuracy.
One major contribution of StackGAN is the use of cascaded GANs for text-to-image synthesis through a sketch-refinement process. By conditioning the stage II GAN on the image produced by the stage I GAN and text description, the stage II GAN is able to correct defects in the stage I output, resulting in high-quality 256x256 images. Prior works have utilized “stacked” GANs to separate the image generation process into structure and style BIBREF42, multiple stages each generating lower-level representations from higher-level representations of the previous stage BIBREF35, and multiple stages combined with a laplacian pyramid approach BIBREF54, which was introduced for image compression by P. Burt and E. Adelson in 1983 and uses the differences between consecutive down-samples of an original image to reconstruct the original image from its down-sampled version BIBREF55. However, these works did not use text descriptions to condition their generator models.
Conditioning Augmentation is the other major contribution of StackGAN. Prior works transformed the natural language text description into a fixed text embedding containing static conditioning variables which were fed to the generator BIBREF8. StackGAN does this and then creates a Gaussian distribution from the text embedding and randomly selects variables from the Gaussian distribution to add to the set of conditioning variables during training. This encourages robustness by introducing small variations to the original text embedding for a particular training image while keeping the training image that the generated output is compared to the same. The result is that the trained model produces more diverse images in the same distribution when using Conditioning Augmentation than the same model using a fixed text embedding BIBREF33.
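Conditioning Augmentation can be sketched as follows; the text-embedding and condition dimensions are illustrative assumptions, while the Gaussian sampling (via the reparameterization trick) and the KL regularization toward a standard normal follow the description above:

```python
import torch
import torch.nn as nn

class ConditioningAugmentation(nn.Module):
    """Samples conditioning variables from a Gaussian built from the text embedding."""
    def __init__(self, text_dim=1024, cond_dim=128):
        super().__init__()
        self.fc = nn.Linear(text_dim, cond_dim * 2)      # predicts mean and log-variance

    def forward(self, text_embedding):
        mu, logvar = self.fc(text_embedding).chunk(2, dim=1)
        std = torch.exp(0.5 * logvar)
        c_hat = mu + std * torch.randn_like(std)         # reparameterization trick
        # KL term regularizing the Gaussian toward N(0, I), as used in StackGAN training.
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return c_hat, kl

ca = ConditioningAugmentation()
c_hat, kl_loss = ca(torch.randn(4, 1024))                # small variations of the same caption embedding
```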
Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: StackGAN++
Proposed by the same authors as StackGAN, StackGAN++ is also a stacked GAN model, but organizes the generators and discriminators in a “tree-like” structure BIBREF47 with multiple stages. The first stage combines a noise vector and conditioning variables (with Conditional Augmentation introduced in BIBREF33) for input to the first generator, which generates a low-resolution image, 64$\times $64 by default (this can be changed depending on the desired number of stages). Each following stage uses the result from the previous stage and the conditioning variables to produce gradually higher-resolution images. These stages do not use the noise vector again, as the creators assume that the randomness it introduces is already preserved in the output of the first stage. The final stage produces a 256$\times $256 high-quality image.
StackGAN++ introduces the joint conditional and unconditional approximation in their designs BIBREF47. The discriminators are trained to calculate the loss between the image produced by the generator and the conditioning variables (measuring how accurately the image represents the description) as well as the loss between the image and real images (probability of the image being real or fake). The generators then aim to minimize the sum of these losses, improving the final result.
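The joint conditional and unconditional objective can be sketched as below. This is a simplified, non-saturating binary cross-entropy formulation; the discriminator `D` is assumed, purely for illustration, to return a pair of logits, one unconditional (real vs. fake) and one conditional (does the image match the conditioning variables `cond`):

```python
import torch
import torch.nn.functional as F

def discriminator_loss(D, real_imgs, fake_imgs, cond):
    """Sum of the unconditional (real vs. fake) and conditional (image matches text?) losses."""
    real_u, real_c = D(real_imgs, cond)                  # hypothetical D returning two logits
    fake_u, fake_c = D(fake_imgs.detach(), cond)
    ones, zeros = torch.ones_like(real_u), torch.zeros_like(real_u)
    uncond = F.binary_cross_entropy_with_logits(real_u, ones) + \
             F.binary_cross_entropy_with_logits(fake_u, zeros)
    cond_loss = F.binary_cross_entropy_with_logits(real_c, ones) + \
                F.binary_cross_entropy_with_logits(fake_c, zeros)
    return uncond + cond_loss

def generator_loss(D, fake_imgs, cond):
    """The generator minimizes the sum of both losses on its own samples."""
    fake_u, fake_c = D(fake_imgs, cond)
    ones = torch.ones_like(fake_u)
    return F.binary_cross_entropy_with_logits(fake_u, ones) + \
           F.binary_cross_entropy_with_logits(fake_c, ones)
```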
Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: AttnGAN
Attentional Generative Adversarial Network (AttnGAN) BIBREF10 is very similar, in terms of its structure, to StackGAN++ BIBREF47, discussed in the previous section, but some novel components are added. Like previous works BIBREF56, BIBREF8, BIBREF33, BIBREF47, a text encoder generates a text embedding with conditioning variables based on the overall sentence. Additionally, the text encoder generates a separate text embedding with conditioning variables based on individual words. This process is optimized to produce meaningful variables using a bidirectional recurrent neural network (BRNN), more specifically bidirectional Long Short Term Memory (LSTM) BIBREF57, which, for each word in the description, generates conditions based on the previous word as well as the next word (bidirectional). The first stage of AttnGAN generates a low-resolution image based on the sentence-level text embedding and random noise vector. The output is fed along with the word-level text embedding to an “attention model”, which matches the word-level conditioning variables to regions of the stage I image, producing a word-context matrix. This is then fed to the next stage of the model along with the raw previous stage output. Each consecutive stage works in the same manner, but produces gradually higher-resolution images conditioned on the previous stage.
Two major contributions were introduced in AttnGAN: the attentional generative network and the Deep Attentional Multimodal Similarity Model (DAMSM) BIBREF47. The attentional generative network matches specific regions of each stage's output image to conditioning variables from the word-level text embedding. This is a very worthy contribution, allowing each consecutive stage to focus on specific regions of the image independently, adding “attentional” details region by region as opposed to the whole image. The DAMSM is also a key feature introduced by AttnGAN, which is used after the result of the final stage to calculate the similarity between the generated image and the text embedding at both the sentence level and the more fine-grained word level. Table TABREF48 shows scores from different metrics for StackGAN, StackGAN++, AttnGAN, and HDGAN on the CUB, Oxford, and COCO datasets. The table shows that AttnGAN outperforms the other models in terms of IS on the CUB dataset by a small amount and greatly outperforms them on the COCO dataset.
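The region-to-word attention at the core of the attentional generative network can be sketched as follows; the dimensions and the single linear projection are illustrative assumptions rather than the exact AttnGAN implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WordAttention(nn.Module):
    """For every image region, computes a word-context vector as an attention-weighted sum of word features."""
    def __init__(self, word_dim=256, region_dim=128):
        super().__init__()
        self.project = nn.Linear(word_dim, region_dim)    # map word features into the region feature space

    def forward(self, region_feats, word_feats):
        # region_feats: (batch, num_regions, region_dim); word_feats: (batch, num_words, word_dim)
        words = self.project(word_feats)                               # (batch, num_words, region_dim)
        scores = torch.bmm(region_feats, words.transpose(1, 2))        # (batch, num_regions, num_words)
        attn = F.softmax(scores, dim=-1)                               # each region attends over the words
        return torch.bmm(attn, words)                                  # word-context matrix, one vector per region

attn = WordAttention()
context = attn(torch.randn(2, 64, 128), torch.randn(2, 12, 256))       # (2, 64, 128)
```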
Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: HDGAN
Hierarchically-nested adversarial network (HDGAN) is a method proposed by BIBREF36, and its main objective is to tackle the difficult problem of generating photographic images from semantic text descriptions. These semantic text descriptions are applied to images from diverse datasets. This method introduces adversarial objectives nested inside hierarchically oriented networks BIBREF36. Hierarchical networks help regularize mid-level representations. In addition to regularizing mid-level representations, they assist the training of the generator in order to capture highly complex still media elements. These elements are captured in statistical order to train the generator based on settings extracted directly from the image; the latter is an ideal scenario. However, this paper aims to incorporate a single-stream architecture. This single-stream architecture functions as the generator that forms an optimum adaptability towards the jointed discriminators. Once the jointed discriminators are set up in an optimum manner, the single-stream architecture then advances the generated images to a much higher resolution BIBREF36.
The main contributions of HDGAN include the introduction of a visual-semantic similarity measure BIBREF36. This feature aids in evaluating the consistency of generated images. In addition to checking the consistency of generated images, one of the key objectives of this step is to test the logical consistency of the end product BIBREF36. The end product in this case would be images that are semantically mapped from text-based natural language descriptions to each area of the picture, e.g. a wing on a bird or a petal on a flower. Deep learning has created a multitude of opportunities and challenges for researchers in the computer vision AI field. Coupled with GAN and multimodal learning architectures, this field has seen tremendous growth BIBREF8, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF22, BIBREF26. Based on these advancements, HDGAN attempts to further extend some desirable and less common features when generating images from textual natural language BIBREF36. In other words, it takes sentences and treats them as a hierarchical structure. This has some positive and negative implications in most cases. For starters, it makes it more complex to generate compelling images. However, one of the key benefits of this elaborate process is the realism obtained once all processes are completed. In addition, one common feature added to this process is the ability to identify parts of sentences with bounding boxes. If a sentence includes common characteristics of a bird, it will surround the attributes of such a bird with bounding boxes. In practice, this should happen if the desired image has other elements such as human faces (e.g. eyes, hair, etc), flowers (e.g. petal size, color, etc), or any other inanimate object (e.g. a table, a mug, etc). Finally, HDGAN evaluated some of its claims on common ideal text-to-image datasets such as CUB, COCO, and Oxford-102 BIBREF8, BIBREF36, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF22, BIBREF26. These datasets were first utilized in earlier works BIBREF8, and most of them sport modified features such as image annotations, labels, or descriptions. The qualitative and quantitative results reported by researchers in this study were far superior to earlier works in this same field of computer vision AI.
Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs
In this subsection, we introduce text-to-image synthesis methods which try to maximize the diversity of the output images, based on the text descriptions.
Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: AC-GAN
Two issues arise in traditional GANs BIBREF58 for image synthesis: (1) the scalability problem: traditional GANs cannot predict a large number of image categories; and (2) the diversity problem: images are often subject to one-to-many mapping, so one image could be labeled with different tags or described using different texts. To address these problems, a GAN conditioned on additional information, e.g. cGAN, is an alternative solution. However, although cGAN and many previously introduced approaches are able to generate images with respect to the text descriptions, they often output images with similar types and visual appearance.
Slightly different from cGAN, the auxiliary classifier GAN (AC-GAN) BIBREF27 proposes to improve the diversity of output images by using an auxiliary classifier to control output images. The overall structure of AC-GAN is shown in Fig. FIGREF15(c). In AC-GAN, every generated image is associated with a class label, in addition to the true/fake label commonly used in GAN or cGAN. The discriminator of AC-GAN not only outputs a probability distribution over sources (i.e. whether the image is true or fake), it also outputs a probability distribution over the class label (i.e. predicting which class the image belongs to).
By using an auxiliary classifier layer to predict the class of the image, AC-GAN is able to use the predicted class labels of the images to ensure that the output consists of images from different classes, resulting in diversified synthesized images. The results show that AC-GAN can generate images with high diversity.
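A minimal sketch of the AC-GAN discriminator's two output heads (source and class) and the corresponding losses is given below; the fully connected backbone and all sizes are illustrative assumptions, not the architecture of the original paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ACDiscriminator(nn.Module):
    """Outputs both a real/fake score and class logits for the auxiliary classifier."""
    def __init__(self, img_dim=64 * 64 * 3, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(img_dim, 512), nn.LeakyReLU(0.2))
        self.source_head = nn.Linear(512, 1)             # real vs. fake logit
        self.class_head = nn.Linear(512, num_classes)    # auxiliary classifier logits

    def forward(self, img):
        h = self.backbone(img.flatten(1))
        return self.source_head(h), self.class_head(h)

D = ACDiscriminator()
imgs, labels = torch.randn(8, 3, 64, 64), torch.randint(0, 10, (8,))
src_logit, cls_logit = D(imgs)
loss = F.binary_cross_entropy_with_logits(src_logit.squeeze(1), torch.ones(8)) \
       + F.cross_entropy(cls_logit, labels)              # source loss + auxiliary class loss
```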
Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: TAC-GAN
Building on the AC-GAN, TAC-GAN BIBREF59 is proposed to replace the class information with textual descriptions as the input to perform the task of text to image synthesis. The architecture of TAC-GAN is shown in Fig. FIGREF15(d), which is similar to AC-GAN. Overall, the major difference between TAC-GAN and AC-GAN is that TAC-GAN conditions the generated images on text descriptions instead of on a class label. This design makes TAC-GAN more generic for image synthesis.
TAC-GAN imposes restrictions on generated images through both texts and class labels. The input vector of TAC-GAN's generative network is built based on a noise vector and an embedded vector representation of the textual descriptions. The discriminator of TAC-GAN is similar to that of AC-GAN, which not only predicts whether the image is fake or not, but also predicts the label of the image. A minor difference of TAC-GAN's discriminator, compared to that of AC-GAN, is that it also receives text information as input before performing its classification.
The experiments and validations, on the Oxford-102 flowers dataset, show that the results produced by TAC-GAN are “slightly better” than other approaches, including GAN-INT-CLS and StackGAN.
Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: Text-SeGAN
In order to improve the diversity of the output images, both AC-GAN and TAC-GAN's discriminators predict class labels of the synthesised images. This process likely enforces the semantic diversity of the images, but class labels are inherently restrictive in describing image semantics, and images described by text can be matched to multiple labels. Therefore, instead of predicting images' class labels, an alternative solution is to directly quantify their semantic relevance.
The architecture of Text-SeGAN is shown in Fig. FIGREF15(e). In order to directly quantify semantic relevance, Text-SeGAN BIBREF28 adds a regression layer to estimate the semantic relevance between the image and text instead of a classifier layer predicting labels. The estimated semantic relevance is a fractional value ranging between 0 and 1, with a higher value reflecting better semantic relevance between the image and text. Due to this unique design, an inherent advantage of Text-SeGAN is that the generated images are not limited to certain classes and semantically match the text input.
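The switch from label classification to semantic-relevance regression can be sketched as follows; the fused image and text feature vectors and all layer sizes are hypothetical, used only to illustrate the regression head:

```python
import torch
import torch.nn as nn

class SemanticRelevanceHead(nn.Module):
    """Regresses a relevance score in [0, 1] between an image and a text description."""
    def __init__(self, img_feat_dim=512, text_feat_dim=128):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(img_feat_dim + text_feat_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),              # fractional relevance instead of class logits
        )

    def forward(self, img_feat, text_feat):
        return self.fc(torch.cat([img_feat, text_feat], dim=1))

head = SemanticRelevanceHead()
relevance = head(torch.randn(8, 512), torch.randn(8, 128))   # values near 1 => strong image-text match
```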
Experiments and validations, on the Oxford-102 flower dataset, show that Text-SeGAN can generate diverse images that are semantically relevant to the input text. In addition, the results of Text-SeGAN show an improved inception score compared to other approaches, including GAN-INT-CLS, StackGAN, TAC-GAN, and HDGAN.
Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: MirrorGAN and Scene Graph GAN
Due to the inherent complexity of visual images and the diversity of text descriptions (i.e. the same words can imply different meanings), it is difficult to precisely match texts to visual images at the semantic level. Most methods we have discussed so far employ a direct text-to-image generation process, but there is no validation of how the generated images comply with the text in a reverse fashion.
To ensure semantic consistency and diversity, MirrorGAN BIBREF60 employs a mirror structure, which reversely learns from generated images to output texts (an image-to-text process) to further validate whether the generated images are indeed consistent with the input texts. MirrorGAN includes three modules: a semantic text embedding module (STEM), a global-local collaborative attentive module for cascaded image generation (GLAM), and a semantic text regeneration and alignment module (STREAM). The back-to-back Text-to-Image (T2I) and Image-to-Text (I2T) processes are combined to progressively enhance the diversity and semantic consistency of the generated images.
In order to enhance the diversity of the output images, Scene Graph GAN BIBREF61 proposes to use visual scene graphs to describe the layout of the objects, allowing users to precisely specify the relationships between objects in the images. In order to convert the visual scene graph into input for a GAN to generate images, this method uses graph convolution to process input graphs. It computes a scene layout by predicting bounding boxes and segmentation masks for objects. After that, it converts the computed layout to an image with a cascaded refinement network.
Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs
Instead of focusing on generating static images, another line of text-to-image synthesis research focuses on generating videos (i.e. sequences of images) from texts. In this context, the synthesised videos are often useful resources for automated assistance or storytelling.
Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs ::: ObamaNet and T2S
One early and interesting work in motion enhancement GANs is to generate spoofed speech and lip-sync videos (or a talking face) of Barack Obama (i.e. ObamaNet) based on text input BIBREF62. This framework consists of three parts, i.e. text-to-speech using “Char2Wav”, mouth-shape representation synced to the audio using a time-delayed LSTM, and “video generation” conditioned on the mouth shape using a “U-Net” architecture. Although the results seem promising, ObamaNet only models the mouth region and the videos are not generated from noise, so it can be regarded as video prediction rather than video generation.
Another meaningful trial of using synthesised videos for automated assistance is to translate spoken language (e.g. text) into sign language video sequences (i.e. T2S) BIBREF63. This is often achieved through a two-step process: converting texts into meaningful units to generate images, followed by a learning component to arrange the images in sequential order for the best representation. More specifically, using RNN based machine translation methods, texts are translated into sign language gloss sequences. Then, glosses are mapped to skeletal pose sequences using a lookup table. To generate videos, a conditional DCGAN is built whose input is the concatenation of the latent representation of the image for a base pose and the skeletal pose information.
Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs ::: T2V
In BIBREF64, a text-to-video model (T2V) is proposed based on the cGAN, in which the generator input is isometric Gaussian noise combined with a text-gist vector. A key component of generating videos from text is to train a conditional generative model to extract both static and dynamic information from text, followed by a hybrid framework combining a Variational Autoencoder (VAE) and a Generative Adversarial Network (GAN).
More specifically, T2V relies on two types of features, static features and dynamic features, to generate videos. Static features, called the “gist”, are used to sketch the text-conditioned background color and object layout structure. Dynamic features, on the other hand, are considered by transforming the input text into an image filter which eventually forms the video generator, consisting of three entangled neural networks. The text-gist vector is generated by a gist generator which maintains static information (e.g. background) and a text2filter which captures the dynamic information (i.e. actions) in the text to generate videos.
As demonstrated in the paper BIBREF64, the generated videos are semantically related to the texts, but have rather low quality (e.g. only $64 \times 64$ resolution).
Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs ::: StoryGAN
Different from T2V, which generates videos from a single text, StoryGAN aims to produce dynamic scenes consistent with the specified texts (i.e. a story written in a multi-sentence paragraph) using a sequential GAN model BIBREF65. The story encoder, context encoder, and discriminators are the main components of this model. By using stochastic sampling, the story encoder intends to learn a low-dimensional embedding vector for the whole story to keep the continuity of the story. The context encoder is proposed to capture contextual information during sequential image generation based on a deep RNN. The two discriminators of StoryGAN are the image discriminator, which evaluates the generated images, and the story discriminator, which ensures global consistency.
The experiments and comparisons, on the CLEVR dataset and the Pororo cartoon dataset, which are originally used for visual question answering, show that StoryGAN improves the generated video quality in terms of Structural Similarity Index (SSIM), visual quality, consistency, and relevance (the last three measures are based on human evaluation).
GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Text-to-image Synthesis Applications
Computer vision applications have strong potential for industries including but not limited to the medical, government, military, entertainment, and online social media fields BIBREF7, BIBREF66, BIBREF67, BIBREF68, BIBREF69, BIBREF70. Text-to-image synthesis is one such application in computer vision AI that has become the main focus in recent years due to its potential for providing beneficial properties and opportunities for a wide range of applicable areas.
Text-to-image synthesis is an application byproduct of deep convolutional decoder networks in combination with GANs BIBREF7, BIBREF8, BIBREF10. Deep convolutional networks have contributed to several breakthroughs in image, video, speech, and audio processing. This learning method intends, among other possibilities, to help translate sequential text descriptions to images supplemented by one or many additional methods. Algorithms and methods developed in the computer vision field have allowed researchers in recent years to create realistic images from plain sentences. Advances in the computer vision, deep convolutional nets, and semantic units have shined light and redirected focus to this research area of text-to-image synthesis, having as its prime directive: to aid in the generation of compelling images with as much fidelity to text descriptions as possible.
To date, models for generating synthetic images from textual natural language in research laboratories at universities and private companies have yielded compelling images of flowers and birds BIBREF8. Though flowers and birds are the most common objects studied thus far, research has been applied to other classes as well. For example, there have been studies focused solely on human faces BIBREF7, BIBREF8, BIBREF71, BIBREF72.
It’s a fascinating time for computer vision AI and deep learning researchers and enthusiasts. The consistent advancement in hardware, software, and contemporaneous development of computer vision AI research disrupts multiple industries. These advances in technology allow for the extraction of several data types from a variety of sources. For example, image data captured from a variety of photo-ready devices, such as smart-phones, and online social media services opened the door to the analysis of large amounts of media datasets BIBREF70. The availability of large media datasets allow new frameworks and algorithms to be proposed and tested on real-world data.
GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Text-to-image Synthesis Benchmark Datasets
A summary of some reviewed methods and benchmark datasets used for validation is reported in Table TABREF43. In addition, the performance of different GANs with respect to the benchmark datasets and performance metrics is reported in Table TABREF48.
In order to synthesize images from text descriptions, many frameworks have taken a minimalistic approach by creating small and background-less images BIBREF73. In most cases, the experiments were conducted on simple datasets, initially containing images of birds and flowers. BIBREF8 contributed to these data sets by adding corresponding natural language text descriptions to subsets of the CUB, MSCOCO, and Oxford-102 datasets, which facilitated the work on text-to-image synthesis for several papers released more recently.
While most deep learning algorithms use the MNIST BIBREF74 dataset as the benchmark, there are four main datasets that are commonly used for evaluation of proposed GAN models for text-to-image synthesis: CUB BIBREF75, Oxford BIBREF76, COCO BIBREF77, and CIFAR-10 BIBREF78. CUB BIBREF75 contains 200 bird categories with matching text descriptions and Oxford BIBREF76 contains 102 categories of flowers with 40-258 images each and matching text descriptions. These datasets contain individual objects, with the text description corresponding to that object, making them relatively simple. COCO BIBREF77 is much more complex, containing 328k images with 91 different object types. The CIFAR-10 BIBREF78 dataset consists of 60000 32$\times $32 colour images in 10 classes, with 6000 images per class. In contrast to CUB and Oxford, whose images each contain an individual object, COCO’s images may contain multiple objects, each with a label, so there are many labels per image. The total number of labels over the 328k images is 2.5 million BIBREF77.
GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Text-to-image Synthesis Benchmark Evaluation Metrics
Several evaluation metrics are used for judging the images produced by text-to-image GANs. Proposed by BIBREF25, the Inception Score (IS) calculates the entropy (randomness) of the conditional distribution, obtained by applying the Inception Model introduced in BIBREF79, and of the marginal distribution of a large set of generated images, which should be low and high, respectively, for meaningful images. Low entropy of the conditional distribution means that the evaluator is confident that the images came from the data distribution, and high entropy of the marginal distribution means that the set of generated images is diverse, which are both desired features. The IS is then computed as the exponentiated average KL-divergence between the conditional and marginal distributions. FCN-scores BIBREF2 are computed in a similar manner, relying on the intuition that realistic images generated by a GAN should be able to be classified correctly by a classifier trained on real images of the same distribution. Therefore, if the FCN classifier classifies a set of synthetic images accurately, the image is probably realistic, and the corresponding GAN gets a high FCN score. Frechet Inception Distance (FID) BIBREF80 is the other commonly used evaluation metric, and takes a different approach, actually comparing the generated images to real images in the distribution. A high FID means there is little relationship between the statistics of the synthetic and real images and vice versa, so lower FIDs are better.
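A simplified sketch of the Inception Score computation, starting from a matrix of conditional class probabilities $p(y|x)$ predicted by a pre-trained Inception model for the generated images (the common practice of averaging over several splits is omitted here):

```python
import numpy as np

def inception_score(p_yx, eps=1e-12):
    """p_yx: (num_images, num_classes) conditional class probabilities from the Inception model."""
    p_y = p_yx.mean(axis=0, keepdims=True)                    # marginal distribution over classes
    kl = p_yx * (np.log(p_yx + eps) - np.log(p_y + eps))      # KL(p(y|x) || p(y)) per image
    return float(np.exp(kl.sum(axis=1).mean()))               # exponentiated mean KL divergence

# Example: sharp, diverse class predictions give a higher score than uniform ones.
sharp = np.eye(10)[np.random.randint(0, 10, size=1000)] * 0.9 + 0.01
print(inception_score(sharp))
```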
The performance of different GANs with respect to the benchmark datasets and performance metrics is reported in Table TABREF48. In addition, Figure FIGREF49 further lists the performance of 14 GANs with respect to their Inception Scores (IS).
GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: GAN Based Text-to-image Synthesis Results Comparison
While we gathered all the data we could find on scores for each model on the CUB, Oxford, and COCO datasets using IS, FID, FCN, and human classifiers, we unfortunately were unable to find certain data for AttnGAN and HDGAN (missing in Table TABREF48). The best evaluation we can give for those with missing data is our own opinions by looking at examples of generated images provided in their papers. In this regard, we observed that HDGAN produced relatively better visual results on the CUB and Oxford datasets while AttnGAN produced far more impressive results than the rest on the more complex COCO dataset. This is evidence that the attentional model and DAMSM introduced by AttnGAN are very effective in producing high-quality images. Examples of the best results of birds and plates of vegetables generated by each model are presented in Figures FIGREF50 and FIGREF51, respectively.
In terms of inception score (IS), which is the metric applied to the majority of models except DC-GAN, the results in Table TABREF48 show that StackGAN++ only showed slight improvement over its predecessor, StackGAN, for text-to-image synthesis. However, StackGAN++ did introduce a very worthy enhancement for unconditional image generation by organizing the generators and discriminators in a “tree-like” structure. This indicates that revising the structures of the discriminators and/or generators can bring a moderate level of improvement in text-to-image synthesis.
In addition, the results in Table TABREF48 also show that DM-GAN BIBREF53 has the best performance, followed by Obj-GAN BIBREF81. Notice that both DM-GAN and Obj-GAN are the most recently developed methods in the field (both published in 2019), indicating that research in text-to-image synthesis is continuously improving the results for better visual quality. Technically, DM-GAN BIBREF53 is a model using dynamic memory to refine fuzzy image contents initially generated from the GAN networks. A memory writing gate is used by DM-GAN to select important text information and generate images based on the selected text accordingly. On the other hand, Obj-GAN BIBREF81 focuses on object-centered text-to-image synthesis. The proposed framework of Obj-GAN consists of a layout generator, including a bounding box generator and a shape generator, and an object-driven attentive image generator. The designs and advancement of DM-GAN and Obj-GAN indicate that research in text-to-image synthesis is advancing to put more emphasis on image details and text semantics for better understanding and perception.
GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Notable Mentions
It is worth noting that although this survey mainly focuses on text-to-image synthesis, there have been other applications of GANs in broader image synthesis field that we found fascinating and worth dedicating a small section to. For example, BIBREF72 used Sem-Latent GANs to generate images of faces based on facial attributes, producing impressive results that, at a glance, could be mistaken for real faces. BIBREF82, BIBREF70, and BIBREF83 demonstrated great success in generating text descriptions from images (image captioning) with great accuracy, with BIBREF82 using an attention-based model that automatically learns to focus on salient objects and BIBREF83 using deep visual-semantic alignments. Finally, there is a contribution made by StackGAN++ that was not mentioned in the dedicated section due to its relation to unconditional image generation as opposed to conditional, namely a color-regularization term BIBREF47. This additional term aims to keep the samples generated from the same input at different stages more consistent in color, which resulted in significantly better results for the unconditional model.
Conclusion
The recent advancement in text-to-image synthesis research opens the door to several compelling methods and architectures. The main objective of text-to-image synthesis initially was to create images from simple labels, and this objective later scaled to natural languages. In this paper, we reviewed novel methods that generate, in our opinion, the most visually-rich and photo-realistic images, from text-based natural language. These generated images often rely on generative adversarial networks (GANs), deep convolutional decoder networks, and multimodal learning methods.
In this paper, we first proposed a taxonomy to organize GAN based text-to-image synthesis frameworks into four major groups: semantic enhancement GANs, resolution enhancement GANs, diversity enhancement GANs, and motion enhancement GANs. The taxonomy provides a clear roadmap to show the motivations, architectures, and differences of different methods, and also outlines their evolution timeline and relationships. Following the proposed taxonomy, we reviewed important features of each method and their architectures. We indicated the model definition and key contributions from some advanced GAN frameworks, including StackGAN, StackGAN++, AttnGAN, DC-GAN, AC-GAN, TAC-GAN, HDGAN, Text-SeGAN, StoryGAN, etc. Many of the solutions surveyed in this paper tackled the highly complex challenge of generating photo-realistic images beyond swatch-size samples, in other words, beyond the work of BIBREF8 in which images were generated from text in 64$\times $64 tiny swatches. Lastly, all methods were evaluated on datasets that included birds, flowers, humans, and other miscellaneous elements. We were also able to locate some important papers that were as impressive as the papers we finally surveyed, though these notable papers have yet to contribute directly or indirectly to the expansion of the vast computer vision AI field. Looking into the future, an excellent extension of the works surveyed in this paper would be to give more independence to the several learning methods (e.g. less human intervention) involved in the studies as well as increasing the size of the output images.
Conflict of Interest
The authors declare that there is no conflict of interest regarding the publication of this article. | Even though natural language and image synthesis were part of several contributions on the supervised side of deep learning, unsupervised learning saw recently a tremendous rise in input from the research community specially on two subproblems: text-based natural language and image synthesis |
c1477a6c86bd1670dd17407590948000c9a6b7c6 | c1477a6c86bd1670dd17407590948000c9a6b7c6_0 | Q: What challenges remain unresolved?
Text: Introduction
“Generative adversarial networks (GANs), and the variations that are now being proposed is the most interesting idea in the last 10 years in ML, in my opinion.” (2016)
– Yann LeCun
A picture is worth a thousand words! While written text provides an efficient, effective, and concise way for communication, visual content, such as images, is a more comprehensive, accurate, and intelligible method of information sharing and understanding. Generation of images from text descriptions, i.e. text-to-image synthesis, is a complex computer vision and machine learning problem that has seen great progress over recent years. Automatic image generation from natural language may allow users to describe visual elements through visually-rich text descriptions. The ability to do so effectively is highly desirable as it could be used in artificial intelligence applications such as computer-aided design, image editing BIBREF0, BIBREF1, game engines for the development of the next generation of video games BIBREF2, and pictorial art generation BIBREF3.
Introduction ::: Traditional Learning Based Text-to-image Synthesis
In the early stages of research, text-to-image synthesis was mainly carried out through a search and supervised learning combined process BIBREF4, as shown in Figure FIGREF4. In order to connect text descriptions to images, one could use the correlation between keywords (or keyphrases) and images to identify informative and “picturable” text units; then, these units would search for the most likely image parts conditioned on the text, eventually optimizing the picture layout conditioned on both the text and the image parts. Such methods often integrated multiple artificial intelligence key components, including natural language processing, computer vision, computer graphics, and machine learning.
The major limitation of the traditional learning based text-to-image synthesis approaches is that they lack the ability to generate new image content; they can only change the characteristics of the given/training images. Alternatively, research in generative models has advanced significantly and delivers solutions to learn from training images and produce new visual content. For example, Attribute2Image BIBREF5 models each image as a composite of foreground and background. In addition, a layered generative model with disentangled latent variables is learned, using a variational auto-encoder, to generate visual content. Because the learning is customized/conditioned by given attributes, the generative models of Attribute2Image can generate images with respect to different attributes, such as gender, hair color, age, etc., as shown in Figure FIGREF5.
Introduction ::: GAN Based Text-to-image Synthesis
Although generative model based text-to-image synthesis provides much more realistic image synthesis results, the image generation is still conditioned by the limited attributes. In recent years, several papers have been published on the subject of text-to-image synthesis. Most of the contributions from these papers rely on multimodal learning approaches that include generative adversarial networks and deep convolutional decoder networks as their main drivers to generate entrancing images from text BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11.
First introduced by Ian Goodfellow et al. BIBREF9, generative adversarial networks (GANs) consist of two neural networks, a generator paired with a discriminator. These two models compete with one another, with the generator attempting to produce synthetic/fake samples that will fool the discriminator and the discriminator attempting to differentiate between real (genuine) and synthetic samples. Because GANs' adversarial training aims to cause generators to produce images similar to the real (training) images, GANs can naturally be used to generate synthetic images (image synthesis), and this process can even be customized further by using text descriptions to specify the types of images to generate, as shown in Figure FIGREF6.
Much like text-to-speech and speech-to-text conversion, there exists a wide variety of problems that text-to-image synthesis could solve in the computer vision field specifically BIBREF8, BIBREF12. Nowadays, researchers are attempting to solve a plethora of computer vision problems with the aid of deep convolutional networks, generative adversarial networks, and a combination of multiple methods, often called multimodal learning methods BIBREF8. For simplicity, multiple learning methods will be referred to as multimodal learning hereafter BIBREF13. Researchers often describe multimodal learning as a method that incorporates characteristics from several methods, algorithms, and ideas. This can include ideas from two or more learning approaches in order to create a robust implementation to solve an uncommon problem or improve a solution BIBREF8, BIBREF14, BIBREF15, BIBREF16, BIBREF17.
In this survey, we focus primarily on reviewing recent works that aim to solve the challenge of text-to-image synthesis using generative adversarial networks (GANs). In order to provide a clear roadmap, we propose a taxonomy to summarize reviewed GANs into four major categories. Our review will elaborate the motivations of methods in each category, analyze typical models, their network architectures, and possible drawbacks for further improvement. The visual abstract of the survey and the list of reviewed GAN frameworks is shown in Figure FIGREF8.
The remainder of the survey is organized as follows. Section 2 presents a brief summary of existing works on subjects similar to that of this paper and highlights the key distinctions making ours unique. Section 3 gives a short introduction to GANs and some preliminary concepts related to image generation, as they are the engines that make text-to-image synthesis possible and are essential building blocks to achieve photo-realistic images from text descriptions. Section 4 proposes a taxonomy to summarize GAN based text-to-image synthesis, and discusses models and architectures of novel works focused solely on text-to-image synthesis. This section will also draw key contributions from these works in relation to their applications. Section 5 reviews GAN based text-to-image synthesis benchmarks, performance metrics, and comparisons, including a simple review of GANs for other applications. In section 6, we conclude with a brief summary and outline ideas for future interesting developments in the field of text-to-image synthesis.
Related Work
With the growth and success of GANs, deep convolutional decoder networks, and multimodal learning methods, these techniques were some of the first procedures which aimed to solve the challenge of image synthesis. Many engineers and scientists in computer vision and AI have contributed through extensive studies and experiments, with numerous proposals and publications detailing their contributions. Because GANs, introduced by BIBREF9, are emerging research topics, their practical applications to image synthesis are still in their infancy. Recently, many new GAN architectures and designs have been proposed to use GANs for different applications, e.g. using GANs to generate sentimental texts BIBREF18, or using GANs to transform natural images into cartoons BIBREF19.
Although GANs are becoming increasingly popular, very few survey papers currently exist to summarize and outline contemporaneous technical innovations and contributions of different GAN architectures BIBREF20, BIBREF21. Survey papers specifically attuned to analyzing different contributions to text-to-image synthesis using GANs are even more scarce. We have thus found two surveys BIBREF6, BIBREF7 on image synthesis using GANs, which are the two most closely related publications to our survey objective. In the following paragraphs, we briefly summarize each of these surveys and point out how our objectives differ from theirs.
In BIBREF6, the authors provide an overview of image synthesis using GANs. In this survey, the authors discuss the motivations for research on image synthesis and introduce some background information on the history of GANs, including a section dedicated to core concepts of GANs, namely generators, discriminators, and the min-max game analogy, and some enhancements to the original GAN model, such as conditional GANs, addition of variational auto-encoders, etc.. In this survey, we will carry out a similar review of the background knowledge because the understanding of these preliminary concepts is paramount for the rest of the paper. Three types of approaches for image generation are reviewed, including direct methods (single generator and discriminator), hierarchical methods (two or more generator-discriminator pairs, each with a different goal), and iterative methods (each generator-discriminator pair generates a gradually higher-resolution image). Following the introduction, BIBREF6 discusses methods for text-to-image and image-to-image synthesis, respectively, and also describes several evaluation metrics for synthetic images, including inception scores and Frechet Inception Distance (FID), and explains the significance of the discriminators acting as learned loss functions as opposed to fixed loss functions.
Different from the above survey, which has a relatively broad scope in GANs, our objective is heavily focused on text-to-image synthesis. Although this topic, text-to-image synthesis, has indeed been covered in BIBREF6, they did so in a much less detailed fashion, mostly listing the many different works in a time-sequential order. In comparison, we will review several representative methods in the field and outline their models and contributions in detail.
Similarly to BIBREF6, the second survey paper BIBREF7 begins with a standard introduction addressing the motivation of image synthesis and the challenges it presents, followed by a section dedicated to core concepts of GANs and enhancements to the original GAN model. In addition, the paper covers the review of two types of applications: (1) unconstrained applications of image synthesis such as super-resolution, image inpainting, etc., and (2) constrained image synthesis applications, namely image-to-image, text-to-image, and sketch-to-image, and also discusses image and video editing using GANs. Again, the scope of this paper is intrinsically comprehensive, while we focus specifically on text-to-image and go into more detail regarding the contributions of novel state-of-the-art models.
Other surveys have been published on related matters, mainly related to the advancements and applications of GANs BIBREF22, BIBREF23, but we have not found any prior works which focus specifically on text-to-image synthesis using GANs. To our knowledge, this is the first paper to do so.
Preliminaries and Frameworks
In this section, we first introduce preliminary knowledge of GANs and one of their commonly used variants, the conditional GAN (i.e. cGAN), which is the building block for many GAN based text-to-image synthesis models. After that, we briefly separate GAN based text-to-image synthesis into two types, simple GAN frameworks vs. advanced GAN frameworks, and discuss why advanced GAN architectures are needed for image synthesis.
Notice that the simple vs. advanced GAN framework separation is rather coarse; in the next section we will propose a taxonomy that summarizes advanced GAN frameworks into four categories, based on their objectives and designs.
Preliminaries and Frameworks ::: Generative Adversarial Neural Network
Before moving on to a discussion and analysis of works applying GANs for text-to-image synthesis, there are some preliminary concepts, enhancements of GANs, datasets, and evaluation metrics that are present in some of the works described in the next section and are thus worth introducing.
As stated previously, GANs were introduced by Ian Goodfellow et al. BIBREF9 in 2014, and consist of two deep neural networks, a generator and a discriminator, which are trained independently with conflicting goals: The generator aims to generate samples closely related to the original data distribution and fool the discriminator, while the discriminator aims to distinguish between samples from the generator model and samples from the true data distribution by calculating the probability of the sample coming from either source. A conceptual view of the generative adversarial network (GAN) architecture is shown in Figure FIGREF11.
The training of GANs is an iterative process that, with each iteration, updates the generator and the discriminator with the goal of each defeating the other, leading each model to become increasingly adept at its specific task until a threshold is reached. This is analogous to a min-max game between the two models, according to the following equation:
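For reference, a standard form of this min-max objective, matching the original formulation of BIBREF9 and the symbol definitions given below, is:

$\min _{\theta _g} \max _{\theta _d} \; \mathbb {E}_{x \sim p_{data}(x)} \big [ \log D_{\theta _d}(x) \big ] + \mathbb {E}_{z \sim p_z(z)} \big [ \log \big (1 - D_{\theta _d}(G_{\theta _g}(z)) \big ) \big ]$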
In Eq. (DISPLAY_FORM10), $x$ denotes a multi-dimensional sample, e.g., an image, and $z$ denotes a multi-dimensional latent space vector, e.g., a multidimensional data point following a predefined distribution function such as that of normal distributions. $D_{\theta _d}()$ denotes a discriminator function, controlled by parameters $\theta _d$, which aims to classify a sample into a binary space. $G_{\theta _g}()$ denotes a generator function, controlled by parameters $\theta _g$, which aims to generate a sample from some latent space vector. For example, $G_{\theta _g}(z)$ means using a latent vector $z$ to generate a synthetic/fake image, and $D_{\theta _d}(x)$ means to classify an image $x$ as binary output (i.e. true/false or 1/0). In the GAN setting, the discriminator $D_{\theta _d}()$ is learned to distinguish a genuine/true image (labeled as 1) from fake images (labeled as 0). Therefore, given a true image $x$, the ideal output from the discriminator $D_{\theta _d}(x)$ would be 1. Given a fake image generated from the generator $G_{\theta _g}(z)$, the ideal prediction from the discriminator $D_{\theta _d}(G_{\theta _g}(z))$ would be 0, indicating the sample is a fake image.
Following the above definition, the $\min \max $ objective function in Eq. (DISPLAY_FORM10) aims to learn parameters for the discriminator ($\theta _d$) and generator ($\theta _g$) to reach an optimization goal: the discriminator intends to differentiate true vs. fake images with maximum capability $\max _{\theta _d}$ whereas the generator intends to minimize the difference between a fake image and a true image $\min _{\theta _g}$. In other words, the discriminator sets the characteristics and the generator produces elements, often images, iteratively until it meets the attributes set forth by the discriminator. GANs are often used with images and other visual elements and are remarkably effective at generating compelling and convincing photorealistic images. Most recently, GANs were used to generate an original painting in an unsupervised fashion BIBREF24. The following sections go into further detail regarding how the generator and discriminator are trained in GANs.
Generator - In image synthesis, the generator network can be thought of as a mapping from one representation space (latent space) to another (actual data) BIBREF21. When it comes to image synthesis, all of the images in the data space fall into some distribution in a very complex and high-dimensional feature space. Sampling from such a complex space is very difficult, so GANs instead train a generator to create synthetic images from a much more simple feature space (usually random noise) called the latent space. The generator network performs up-sampling of the latent space and is usually a deep neural network consisting of several convolutional and/or fully connected layers BIBREF21. The generator is trained using gradient descent to update the weights of the generator network with the aim of producing data (in our case, images) that the discriminator classifies as real.
Discriminator - The discriminator network can be thought of as a mapping from image data to the probability of the image coming from the real data space, and is also generally a deep neural network consisting of several convolution and/or fully connected layers. However, the discriminator performs down-sampling as opposed to up-sampling. Like the generator, it is trained using gradient descent but its goal is to update the weights so that it is more likely to correctly classify images as real or fake.
In GANs, the ideal outcome is for both the generator's and discriminator's cost functions to converge so that the generator produces photo-realistic images that are indistinguishable from real data, and the discriminator at the same time becomes an expert at differentiating between real and synthetic data. This, however, is not possible since a reduction in cost of one model generally leads to an increase in cost of the other. This phenomenon makes training GANs very difficult, and training them simultaneously (both models performing gradient descent in parallel) often leads to a stable orbit where neither model is able to converge. To combat this, the generator and discriminator are often trained independently. In this case, the GAN remains the same, but there are different training stages. In one stage, the weights of the generator are kept constant and gradient descent updates the weights of the discriminator, and in the other stage the weights of the discriminator are kept constant while gradient descent updates the weights of the generator. This is repeated for some number of epochs until a desired low cost for each model is reached BIBREF25.
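As a minimal illustration of this alternating training procedure, the sketch below freezes one network while updating the other in each stage; the generator, discriminator (assumed to end in a sigmoid), data loader, and hyper-parameters are hypothetical placeholders rather than the setup of any specific paper reviewed here.

import torch
import torch.nn.functional as F

# Minimal sketch of alternating GAN training. `generator`, `discriminator`,
# and `real_loader` are hypothetical placeholders; the discriminator is
# assumed to output probabilities (sigmoid output of shape (batch, 1)).
def train_gan(generator, discriminator, real_loader, latent_dim=100, epochs=10):
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))
    for _ in range(epochs):
        for real_images in real_loader:
            batch = real_images.size(0)
            ones = torch.ones(batch, 1)
            zeros = torch.zeros(batch, 1)

            # Stage 1: update the discriminator while the generator is frozen.
            z = torch.randn(batch, latent_dim)
            fake_images = generator(z).detach()   # detach freezes the generator
            d_loss = F.binary_cross_entropy(discriminator(real_images), ones) \
                   + F.binary_cross_entropy(discriminator(fake_images), zeros)
            d_opt.zero_grad()
            d_loss.backward()
            d_opt.step()

            # Stage 2: update the generator while the discriminator is frozen.
            z = torch.randn(batch, latent_dim)
            g_loss = F.binary_cross_entropy(discriminator(generator(z)), ones)
            g_opt.zero_grad()
            g_loss.backward()
            g_opt.step()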
Preliminaries and Frameworks ::: cGAN: Conditional GAN
Conditional Generative Adversarial Networks (cGAN) are an enhancement of GANs proposed by BIBREF26 shortly after the introduction of GANs by BIBREF9. The objective function of the cGAN is defined in Eq. (DISPLAY_FORM13) which is very similar to the GAN objective function in Eq. (DISPLAY_FORM10) except that the inputs to both discriminator and generator are conditioned by a class label $y$.
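Written out explicitly, a standard form of this conditional objective, in which both the discriminator and the generator receive the condition $y$ alongside their original inputs, is:

$\min _{\theta _g} \max _{\theta _d} \; \mathbb {E}_{x \sim p_{data}(x)} \big [ \log D_{\theta _d}(x|y) \big ] + \mathbb {E}_{z \sim p_z(z)} \big [ \log \big (1 - D_{\theta _d}(G_{\theta _g}(z|y)|y) \big ) \big ]$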
The main technical innovation of cGAN is that it introduces an additional input or inputs to the original GAN model, allowing the model to be trained on information such as class labels or other conditioning variables as well as the samples themselves, concurrently. Whereas the original GAN was trained only with samples from the data distribution, resulting in the generated sample reflecting the general data distribution, cGAN enables directing the model to generate more tailored outputs.
In Figure FIGREF14, the condition vector is the class label (text string) "Red bird", which is fed to both the generator and discriminator. It is important, however, that the condition vector is related to the real data. If the model in Figure FIGREF14 was trained with the same set of real data (red birds) but the condition text was "Yellow fish", the generator would learn to create images of red birds when conditioned with the text "Yellow fish".
Note that the condition vector in cGAN can come in many forms, such as texts, not just limited to the class label. Such a unique design provides a direct solution to generate images conditioned by predefined specifications. As a result, cGAN has been used in text-to-image synthesis since the very first day of its invention although modern approaches can deliver much better text-to-image synthesis results.
Preliminaries and Frameworks ::: Simple GAN Frameworks for Text-to-Image Synthesis
In order to generate images from text, one simple solution is to employ the conditional GAN (cGAN) designs and add conditions to the training samples, such that the GAN is trained with respect to the underlying conditions. Several pioneer works have followed similar designs for text-to-image synthesis.
An essential disadvantage of using cGAN for text-to-image synthesis is that it cannot handle complicated textual descriptions for image generation, because cGAN uses labels as conditions to restrict the GAN inputs. If the text inputs have multiple keywords (or long text descriptions), they cannot be used simultaneously to restrict the input. Instead of using text as conditions, another two approaches BIBREF8, BIBREF16 use text as input features, and concatenate such features with other features to train the discriminator and generator, as shown in Figure FIGREF15(b) and (c). To ensure text can be used as GAN input, a feature embedding or feature representation learning BIBREF29, BIBREF30 function $\varphi ()$ is often introduced to convert the input text into numeric features, which are further concatenated with other features to train GANs.
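As a minimal sketch of this design (with a hypothetical text encoder $\varphi ()$ producing a fixed-size embedding, and illustrative layer sizes), the text features are concatenated with the noise vector on the generator side and with image features on the discriminator side; actual architectures in the reviewed papers differ in their convolutional details.

import torch
import torch.nn as nn

# Sketch of text-conditioned GAN inputs: the text embedding phi(text) is
# concatenated with the noise vector (generator) and with image features
# (discriminator). Layer sizes are illustrative assumptions only.
class TextConditionedGenerator(nn.Module):
    def __init__(self, latent_dim=100, text_dim=256, img_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + text_dim, 512), nn.ReLU(),
            nn.Linear(512, img_pixels), nn.Tanh())

    def forward(self, z, text_embedding):          # text_embedding = phi(text)
        return self.net(torch.cat([z, text_embedding], dim=1))

class TextConditionedDiscriminator(nn.Module):
    def __init__(self, text_dim=256, img_pixels=64 * 64 * 3):
        super().__init__()
        self.img_net = nn.Sequential(nn.Linear(img_pixels, 512), nn.LeakyReLU(0.2))
        self.joint = nn.Sequential(nn.Linear(512 + text_dim, 1), nn.Sigmoid())

    def forward(self, flat_image, text_embedding):
        feats = self.img_net(flat_image)
        return self.joint(torch.cat([feats, text_embedding], dim=1))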
Preliminaries and Frameworks ::: Advanced GAN Frameworks for Text-to-Image Synthesis
Motivated by the GAN and conditional GAN (cGAN) design, many GAN based frameworks have been proposed to generate images, with different designs and architectures, such as using multiple discriminators, using progressively trained discriminators, or using hierarchical discriminators. Figure FIGREF17 outlines several advanced GAN frameworks in the literature. In addition to these frameworks, many new designs are being proposed to advance the field with rather sophisticated architectures. For example, a recent work BIBREF37 proposes to use a pyramid generator and three independent discriminators, each focusing on a different aspect of the images, to lead the generator towards creating images that are photo-realistic on multiple levels. Another recent publication BIBREF38 proposes to use the discriminator to measure semantic relevance between image and text instead of class prediction (as most discriminators in GANs do), resulting in a new GAN structure that outperforms the text conditioned auxiliary classifier (TAC-GAN) BIBREF16 and generates images that are diverse, realistic, and relevant to the input text regardless of class.
In the following section, we will first propose a taxonomy that summarizes advanced GAN frameworks for text-to-image synthesis, and then review the most recently proposed solutions to the challenge of generating photo-realistic images conditioned on natural language text descriptions using GANs. The solutions we discuss are selected based on relevance and quality of contributions. Many publications exist on the subject of image generation using GANs, but in this paper we focus specifically on models for text-to-image synthesis, with the review emphasizing the “model” and “contributions” for text-to-image synthesis. At the end of this section, we also briefly review methods using GANs for other image-synthesis applications.
Text-to-Image Synthesis Taxonomy and Categorization
In this section, we propose a taxonomy to summarize advanced GAN based text-to-image synthesis frameworks, as shown in Figure FIGREF24. The taxonomy organizes GAN frameworks into four categories: Semantic Enhancement GANs, Resolution Enhancement GANs, Diversity Enhancement GANs, and Motion Enhancement GANs. Following the proposed taxonomy, each subsection will introduce several typical frameworks and address their techniques of using GANs to solve certain aspects of the text-to-image synthesis challenge.
Text-to-Image Synthesis Taxonomy and Categorization ::: GAN based Text-to-Image Synthesis Taxonomy
Although the ultimate goal of text-to-image synthesis is to generate images closely related to the textual descriptions, the relevance of the images to the texts is often validated from different perspectives, due to the inherent diversity of human perceptions. For example, when generating images matching the description “rose flowers”, some users may know the exact type of flowers they like and intend to generate rose flowers with similar colors. Other users may seek to generate high quality rose flowers with a nice background (e.g. a garden). A third group of users may be more interested in generating flowers similar to roses but with different colors and visual appearance, e.g. roses, begonia, and peony. A fourth group of users may want to not only generate flower images, but also use them to form a meaningful action, e.g. a video clip showing flower growth, performing a magic show using those flowers, or telling a love story using the flowers.
From the text-to-image synthesis point of view, the first group of users intend to precisely control the semantics of the generated images, and their goal is to match the texts and images at the semantic level. The second group of users are more focused on the resolution and quality of the images, in addition to the requirement that the images and texts are semantically related. For the third group of users, the goal is to diversify the output images, such that the images carry diversified visual appearances and are also semantically related. The fourth user group adds a new dimension to image synthesis, and aims to generate sequences of images which are coherent in temporal order, i.e. capture the motion information.
Based on the above descriptions, we categorize GAN based Text-to-Image Synthesis into a taxonomy with four major categories, as shown in Fig. FIGREF24.
Semantic Enhancement GANs: Semantic enhancement GANs represent pioneer works of GAN frameworks for text-to-image synthesis. The main focus of these GAN frameworks is to ensure that the generated images are semantically related to the input texts. This objective is mainly achieved by using a neural network to encode texts as dense features, which are further fed to a second network to generate images matching the texts.
Resolution Enhancement GANs: Resolution enhancement GANs mainly focus on generating high quality images which are semantically matched to the texts. This is mainly achieved through a multi-stage GAN framework, where the outputs from earlier stage GANs are fed to the second (or later) stage GAN to generate better quality images.
Diversity Enhancement GANs: Diversity enhancement GANs intend to diversify the output images, such that the generated images are not only semantically related but also have different types and visual appearance. This objective is mainly achieved through an additional component to estimate semantic relevance between generated images and texts, in order to maximize the output diversity.
Motion Enhancement GANs: Motion enhancement GANs intend to add a temporal dimension to the output images, such that they can form meaningful actions with respect to the text descriptions. This goal is mainly achieved through a two-step process which first generates images matching the “actions” of the texts, followed by a mapping or alignment procedure to ensure that the images are coherent in temporal order.
In the following, we will introduce how these GAN frameworks evolve for text-to-image synthesis, and will also review some typical methods of each category.
Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs
Semantic relevance is one of the most important criteria of text-to-image synthesis. Most GANs discussed in this survey are required to generate images semantically related to the text descriptions. However, semantic relevance is a rather subjective measure, and images are inherently rich in terms of their semantics and interpretations. Therefore, many GANs are further proposed to enhance text-to-image synthesis from different perspectives. In this subsection, we will review several classical approaches which commonly serve as text-to-image synthesis baselines.
Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs ::: DC-GAN
The deep convolutional generative adversarial network (DC-GAN) BIBREF8 represents the pioneer work for text-to-image synthesis using GANs. Its main goal is to train a deep convolutional generative adversarial network on text features, where the text features are encoded by another neural network, a hybrid character-level convolutional recurrent network; both networks perform feed-forward inference conditioned on the text features. Generating realistic images automatically from natural language text is the motivation of several of the works proposed in this computer vision field. However, current artificial intelligence (AI) systems are still far from achieving this task BIBREF8, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF22, BIBREF26. Lately, recurrent neural networks have led the way in developing frameworks that learn discriminatively on text features. At the same time, generative adversarial networks (GANs) have recently begun to show promise in generating compelling images of a whole host of elements including but not limited to faces, birds, flowers, and less common images such as room interiors BIBREF8. DC-GAN is a multimodal learning model that attempts to bridge together both of the above mentioned unsupervised machine learning algorithms, recurrent neural networks (RNNs) and generative adversarial networks (GANs), with the purpose of speeding up the generation of images from text.
Deep learning has shed light on some of the most sophisticated advances in natural language representation, image synthesis BIBREF7, BIBREF8, BIBREF43, BIBREF35, and classification of generic data BIBREF44. However, the bulk of the latest breakthroughs in deep learning and computer vision were related to supervised learning BIBREF8. Even though natural language and image synthesis were part of several contributions on the supervised side of deep learning, unsupervised learning has recently seen a tremendous rise in attention from the research community, especially on two subproblems: text-based natural language and image synthesis BIBREF45, BIBREF14, BIBREF8, BIBREF46, BIBREF47. These subproblems are typically subdivided as focused research areas. DC-GAN's contributions are mainly driven by these two research areas. In order to generate plausible images from natural language, DC-GAN's contributions revolve around developing a straightforward yet effective GAN architecture and training strategy that allows natural text-to-image synthesis. These contributions are primarily tested on the Caltech-UCSD Birds and Oxford-102 Flowers datasets. Each image in these datasets carries five text descriptions, which were created by the research team when setting up the evaluation environment. The DC-GAN model is subsequently trained on several subcategories, which in this research represent the training and testing sub-datasets. The performance shown by these experiments displays a promising yet effective way to generate images from textual natural language descriptions BIBREF8.
Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs ::: DC-GAN Extensions
Following the pioneer DC-GAN framework BIBREF8, many studies propose revised network structures (e.g. different discriminators) in order to improve images with better semantic relevance to the texts. Based on the deep convolutional adversarial network (DC-GAN) architecture, GAN-CLS with an image-text matching discriminator, GAN-INT learned with text manifold interpolation, and GAN-INT-CLS which combines both are proposed to find the semantic match between text and image. Similar to the DC-GAN architecture, an adaptive loss function (i.e. Perceptual Loss BIBREF48) is proposed for semantic image synthesis which can synthesize a realistic image that not only matches the target text description but also keeps the irrelevant features (e.g. background) from source images BIBREF49. Regarding the perceptual losses, three loss functions (i.e. pixel reconstruction loss, activation reconstruction loss and texture reconstruction loss) are proposed in BIBREF50, where the network architectures are constructed based on DC-GAN, i.e. GAN-INT-CLS-Pixel, GAN-INT-CLS-VGG and GAN-INT-CLS-Gram with respect to the three losses. In BIBREF49, a residual transformation unit is added in the network to retain similar structure of the source image.
Following BIBREF49, and considering that features in early CNN layers capture the background while the foreground is obtained in later layers, a pair of discriminators with different architectures (i.e. Paired-D GAN) is proposed to synthesize the background and foreground from a source image separately BIBREF51. Meanwhile, skip-connections in the generator are employed to more precisely retain background information in the source image.
Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs ::: MC-GAN
When synthesising images, most text-to-image synthesis methods consider each output image as one single unit to characterize its semantic relevance to the texts. This is likely problematic because most images naturally consist of two crucial components, foreground and background, and it is hard to characterize the semantics of an image if the whole image is treated as a single unit without properly separating these two components.
In order to enhance the semantic relevance of the images, a multi-conditional GAN (MC-GAN) BIBREF52 is proposed to synthesize a target image by combining the background of a source image and a text-described foreground object which does not exist in the source image. A unique feature of MC-GAN is that it proposes a synthesis block in which the background feature is extracted from the given image without a non-linear function (i.e. only using convolution and batch normalization) and the foreground feature is the feature map from the previous layer.
Because MC-GAN is able to properly model the background and foreground of the generated images, a unique strength of MC-GAN is that users are able to provide a base image and MC-GAN is able to preserve the background information of the base image to generate new images.
Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs
Because training GANs becomes much more difficult when generating high-resolution images, a two-stage GAN (i.e. StackGAN) is proposed in which rough (i.e. low-resolution) images are generated in stage I and refined in stage II. To further improve the quality of generated images, the second version of StackGAN (i.e. StackGAN++) is proposed to use multi-stage GANs to generate multi-scale images. A color-consistency regularization term is also added into the loss to keep the consistency of images at different scales.
While StackGAN and StackGAN++ are both built on the global sentence vector, AttnGAN is proposed to use an attention mechanism (i.e. the Deep Attentional Multimodal Similarity Model (DAMSM)) to model multi-level information (i.e. word level and sentence level) in GANs. In the following, StackGAN, StackGAN++ and AttnGAN will be explained in detail.
Recently, the Dynamic Memory Generative Adversarial Network (DM-GAN) BIBREF53, which uses a dynamic memory component, was proposed to focus on refining the initially generated image, which is the key to the success of generating high quality images.
Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: StackGAN
In 2017, Zhang et al. proposed a model for generating photo-realistic images from text descriptions called StackGAN (Stacked Generative Adversarial Network) BIBREF33. In their work, they define a two-stage model that uses two cascaded GANs, each corresponding to one of the stages. The stage I GAN takes a text description as input, converts the text description to a text embedding containing several conditioning variables, and generates a low-quality 64$\times $64 image with rough shapes and colors based on the computed conditioning variables. The stage II GAN then takes this low-quality stage I image as well as the same text embedding and uses the conditioning variables to correct and add more detail to the stage I result. The output of stage II is a photorealistic 256$\times $256 image that resembles the text description with compelling accuracy.
One major contribution of StackGAN is the use of cascaded GANs for text-to-image synthesis through a sketch-refinement process. By conditioning the stage II GAN on the image produced by the stage I GAN and text description, the stage II GAN is able to correct defects in the stage I output, resulting in high-quality 256x256 images. Prior works have utilized “stacked” GANs to separate the image generation process into structure and style BIBREF42, multiple stages each generating lower-level representations from higher-level representations of the previous stage BIBREF35, and multiple stages combined with a laplacian pyramid approach BIBREF54, which was introduced for image compression by P. Burt and E. Adelson in 1983 and uses the differences between consecutive down-samples of an original image to reconstruct the original image from its down-sampled version BIBREF55. However, these works did not use text descriptions to condition their generator models.
Conditioning Augmentation is the other major contribution of StackGAN. Prior works transformed the natural language text description into a fixed text embedding containing static conditioning variables which were fed to the generator BIBREF8. StackGAN does this and then creates a Gaussian distribution from the text embedding and randomly samples variables from the Gaussian distribution to add to the set of conditioning variables during training. This encourages robustness by introducing small variations to the original text embedding for a particular training image while keeping unchanged the training image to which the generated output is compared. The result is that the trained model produces more diverse images in the same distribution when using Conditioning Augmentation than the same model using a fixed text embedding BIBREF33.
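A minimal sketch of this Conditioning Augmentation step is given below: the text embedding is mapped to a mean and log-variance, and the conditioning variables are sampled from the resulting Gaussian via the reparameterization trick; the dimensions are illustrative assumptions rather than the exact values of BIBREF33.

import torch
import torch.nn as nn

# Sketch of StackGAN-style Conditioning Augmentation: sample conditioning
# variables from a Gaussian derived from the text embedding. Dimensions are
# illustrative assumptions.
class ConditioningAugmentation(nn.Module):
    def __init__(self, text_dim=1024, cond_dim=128):
        super().__init__()
        self.fc = nn.Linear(text_dim, cond_dim * 2)  # predicts mean and log-variance

    def forward(self, text_embedding):
        mu, log_var = self.fc(text_embedding).chunk(2, dim=1)
        eps = torch.randn_like(mu)                    # reparameterization trick
        c_hat = mu + eps * torch.exp(0.5 * log_var)   # sampled conditioning variables
        return c_hat, mu, log_var                     # mu/log_var can feed a KL regularizer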
Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: StackGAN++
Proposed by the same authors as StackGAN, StackGAN++ is also a stacked GAN model, but organizes the generators and discriminators in a “tree-like” structure BIBREF47 with multiple stages. The first stage combines a noise vector and conditioning variables (with Conditioning Augmentation introduced in BIBREF33) as input to the first generator, which generates a low-resolution image, 64$\times $64 by default (this can be changed depending on the desired number of stages). Each following stage uses the result from the previous stage and the conditioning variables to produce gradually higher-resolution images. These stages do not use the noise vector again, as the creators assume that the randomness it introduces is already preserved in the output of the first stage. The final stage produces a 256$\times $256 high-quality image.
StackGAN++ introduces the joint conditional and unconditional approximation in their designs BIBREF47. The discriminators are trained to calculate the loss between the image produced by the generator and the conditioning variables (measuring how accurately the image represents the description) as well as the loss between the image and real images (probability of the image being real or fake). The generators then aim to minimize the sum of these losses, improving the final result.
Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: AttnGAN
Attentional Generative Adversarial Network (AttnGAN) BIBREF10 is very similar, in terms of its structure, to StackGAN++ BIBREF47, discussed in the previous section, but some novel components are added. Like previous works BIBREF56, BIBREF8, BIBREF33, BIBREF47, a text encoder generates a text embedding with conditioning variables based on the overall sentence. Additionally, the text encoder generates a separate text embedding with conditioning variables based on individual words. This process is optimized to produce meaningful variables using a bidirectional recurrent neural network (BRNN), more specifically bidirectional Long Short Term Memory (LSTM) BIBREF57, which, for each word in the description, generates conditions based on the previous word as well as the next word (bidirectional). The first stage of AttnGAN generates a low-resolution image based on the sentence-level text embedding and random noise vector. The output is fed along with the word-level text embedding to an “attention model”, which matches the word-level conditioning variables to regions of the stage I image, producing a word-context matrix. This is then fed to the next stage of the model along with the raw previous stage output. Each consecutive stage works in the same manner, but produces gradually higher-resolution images conditioned on the previous stage.
Two major contributions were introduced in AttnGAN: the attentional generative network and the Deep Attentional Multimodal Similarity Model (DAMSM) BIBREF47. The attentional generative network matches specific regions of each stage's output image to conditioning variables from the word-level text embedding. This is a very worthy contribution, allowing each consecutive stage to focus on specific regions of the image independently, adding “attentional” details region by region as opposed to the whole image. The DAMSM is also a key feature introduced by AttnGAN, which is used after the result of the final stage to calculate the similarity between the generated image and the text embedding at both the sentence level and the more fine-grained word level. Table TABREF48 shows scores from different metrics for StackGAN, StackGAN++, AttnGAN, and HDGAN on the CUB, Oxford, and COCO datasets. The table shows that AttnGAN outperforms the other models in terms of IS on the CUB dataset by a small amount and greatly outperforms them on the COCO dataset.
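The word-level attention at the core of the attentional generative network can be sketched roughly as follows: each sub-region of the previous stage's image attends over the word embeddings to produce a per-region word-context vector. This is a simplified illustration and omits the learned projections and scaling used in the actual AttnGAN formulation.

import torch
import torch.nn.functional as F

# Rough sketch of word-level attention: image region features attend over word
# embeddings to build a word-context matrix for the next stage (simplified).
def word_attention(region_feats, word_embs):
    # region_feats: (batch, num_regions, dim); word_embs: (batch, num_words, dim)
    scores = torch.bmm(region_feats, word_embs.transpose(1, 2))  # (B, R, W)
    attn = F.softmax(scores, dim=-1)                             # weights over words
    word_context = torch.bmm(attn, word_embs)                    # (B, R, dim)
    return word_context                                          # fed to the next stage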
Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: HDGAN
The hierarchically-nested adversarial network (HDGAN) is a method proposed by BIBREF36, and its main objective is to tackle the difficult problem of generating photographic images from semantic text descriptions, applied to images from diverse datasets. This method introduces adversarial objectives nested inside hierarchically oriented networks BIBREF36. Hierarchical networks help regularize mid-level representations and assist the training of the generator in order to capture highly complex still media elements. These elements are captured in statistical order to train the generator based on settings extracted directly from the image, which is an ideal scenario. In addition, this work incorporates a single-stream architecture, which functions as the generator and forms an optimum adaptability towards the jointed discriminators. Once the jointed discriminators are set up in an optimum manner, the single-stream architecture then advances generated images to achieve a much higher resolution BIBREF36.
The main contributions of HDGAN include the introduction of a visual-semantic similarity measure BIBREF36, which aids in evaluating the consistency of generated images. In addition to checking the consistency of generated images, one of the key objectives of this step is to test the logical consistency of the end product BIBREF36, i.e. images that are semantically mapped from text-based natural language descriptions to each area of the picture, e.g. a wing on a bird or a petal on a flower. Deep learning has created a multitude of opportunities and challenges for researchers in the computer vision AI field. Coupled with GAN and multimodal learning architectures, this field has seen tremendous growth BIBREF8, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF22, BIBREF26. Based on these advancements, HDGAN attempts to further extend some desirable and less common features when generating images from textual natural language BIBREF36. In other words, it takes sentences and treats them as a hierarchical structure. This has some positive and negative implications in most cases. For starters, it makes it more complex to generate compelling images. However, one of the key benefits of this elaborate process is the realism obtained once all processes are completed. In addition, one common feature added to this process is the ability to identify parts of sentences with bounding boxes. If a sentence includes common characteristics of a bird, it will surround the attributes of such a bird with bounding boxes. In practice, this should also happen if the desired image has other elements such as human faces (e.g. eyes, hair, etc), flowers (e.g. petal size, color, etc), or any other inanimate object (e.g. a table, a mug, etc). Finally, HDGAN evaluated some of its claims on common text-to-image datasets such as CUB, COCO, and Oxford-102 BIBREF8, BIBREF36, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF22, BIBREF26. These datasets were first utilized in earlier works BIBREF8, and most of them sport modified features such as image annotations, labels, or descriptions. The qualitative and quantitative results reported by the researchers in this study were far superior to those of earlier works in this same field of computer vision AI.
Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs
In this subsection, we introduce text-to-image synthesis methods which try to maximize the diversity of the output images, based on the text descriptions.
Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: AC-GAN
Two issues arise in traditional GANs BIBREF58 for image synthesis: (1) the scalability problem: traditional GANs cannot predict a large number of image categories; and (2) the diversity problem: images are often subject to one-to-many mapping, so one image could be labeled with different tags or described using different texts. To address these problems, a GAN conditioned on additional information, e.g. cGAN, is an alternative solution. However, although cGAN and many previously introduced approaches are able to generate images with respect to the text descriptions, they often output images with similar types and visual appearance.
Slightly different from the cGAN, the auxiliary classifier GAN (AC-GAN) BIBREF27 proposes to improve the diversity of output images by using an auxiliary classifier to control output images. The overall structure of AC-GAN is shown in Fig. FIGREF15(c). In AC-GAN, every generated image is associated with a class label, in addition to the true/fake label which is commonly used in GAN or cGAN. The discriminator of AC-GAN not only outputs a probability distribution over sources (i.e. whether the image is true or fake), it also outputs a probability distribution over the class labels (i.e. predicts which class the image belongs to).
By using an auxiliary classifier layer to predict the class of the image, AC-GAN is able to use the predicted class labels of the images to ensure that the output consists of images from different classes, resulting in diversified synthetic images. The results show that AC-GAN can generate images with high diversity.
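This two-headed discriminator design can be sketched as follows, with a shared feature extractor feeding both a real/fake head and a class-prediction head; the layer sizes and flattened image input are illustrative assumptions rather than the original AC-GAN architecture.

import torch.nn as nn

# Sketch of an AC-GAN style discriminator: shared features feed two heads, one
# for the real/fake decision and one for class prediction (sizes are
# illustrative assumptions).
class ACGANDiscriminator(nn.Module):
    def __init__(self, img_pixels=32 * 32 * 3, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(img_pixels, 512), nn.LeakyReLU(0.2))
        self.real_fake_head = nn.Sequential(nn.Linear(512, 1), nn.Sigmoid())
        self.class_head = nn.Linear(512, num_classes)  # logits over class labels

    def forward(self, flat_image):
        h = self.features(flat_image)
        return self.real_fake_head(h), self.class_head(h)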
Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: TAC-GAN
Building on the AC-GAN, TAC-GAN BIBREF59 is proposed to replace the class information with textual descriptions as the input to perform the task of text to image synthesis. The architecture of TAC-GAN is shown in Fig. FIGREF15(d), which is similar to AC-GAN. Overall, the major difference between TAC-GAN and AC-GAN is that TAC-GAN conditions the generated images on text descriptions instead of on a class label. This design makes TAC-GAN more generic for image synthesis.
TAC-GAN imposes restrictions on generated images through both texts and class labels. The input vector of TAC-GAN's generative network is built from a noise vector and an embedded vector representation of the textual descriptions. The discriminator of TAC-GAN is similar to that of AC-GAN, which not only predicts whether the image is fake or not, but also predicts the label of the images. A minor difference of TAC-GAN's discriminator, compared to that of AC-GAN, is that it also receives text information as input before performing its classification.
The experiments and validations, on the Oxford-102 flowers dataset, show that the results produced by TAC-GAN are “slightly better” than other approaches, including GAN-INT-CLS and StackGAN.
Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: Text-SeGAN
In order to improve the diversity of the output images, both AC-GAN and TAC-GAN's discriminators predict class labels of the synthesised images. This process likely enforces the semantic diversity of the images, but class labels are inherently restrictive in describing image semantics, and images described by text can be matched to multiple labels. Therefore, instead of predicting images' class labels, an alternative solution is to directly quantify their semantic relevance.
The architecture of Text-SeGAN is shown in Fig. FIGREF15(e). In order to directly quantify semantic relevance, Text-SeGAN BIBREF28 adds a regression layer to estimate the semantic relevance between the image and text instead of a classifier layer predicting labels. The estimated semantic relevance is a fractional value ranging between 0 and 1, with a higher value reflecting better semantic relevance between the image and text. Due to this unique design, an inherent advantage of Text-SeGAN is that the generated images are not limited to certain classes and semantically match the text input.
Experiments and validations, on the Oxford-102 flower dataset, show that Text-SeGAN can generate diverse images that are semantically relevant to the input text. In addition, the results of Text-SeGAN show an improved inception score compared to other approaches, including GAN-INT-CLS, StackGAN, TAC-GAN, and HDGAN.
Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: MirrorGAN and Scene Graph GAN
Due to the inherent complexity of visual images, and the diversity of text descriptions (i.e. the same words could imply different meanings), it is difficult to precisely match the texts to the visual images at the semantic level. Most methods we have discussed so far employ a direct text-to-image generation process, but there is no validation of how generated images comply with the text in a reverse fashion.
To ensure semantic consistency and diversity, MirrorGAN BIBREF60 employs a mirror structure, which reversely learns from generated images to output texts (an image-to-text process) to further validate whether the generated images are indeed consistent with the input texts. MirrorGAN includes three modules: a semantic text embedding module (STEM), a global-local collaborative attentive module for cascaded image generation (GLAM), and a semantic text regeneration and alignment module (STREAM). The back-to-back Text-to-Image (T2I) and Image-to-Text (I2T) are combined to progressively enhance the diversity and semantic consistency of the generated images.
In order to enhance the diversity of the output images, Scene Graph GAN BIBREF61 proposes to use visual scene graphs to describe the layout of the objects, allowing users to precisely specify the relationships between objects in the images. In order to convert the visual scene graph into input for the GAN to generate images, this method uses graph convolution to process input graphs. It computes a scene layout by predicting bounding boxes and segmentation masks for objects. After that, it converts the computed layout to an image with a cascaded refinement network.
Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs
Instead of focusing on generating static images, another line of text-to-image synthesis research focuses on generating videos (i.e. sequences of images) from texts. In this context, the synthesised videos are often useful resources for automated assistance or story telling.
Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs ::: ObamaNet and T2S
One early and interesting work on motion enhancement GANs generates spoofed speech and lip-sync videos (or talking faces) of Barack Obama (i.e. ObamaNet) based on text input BIBREF62. This framework consists of three parts, i.e. text to speech using “Char2Wav”, mouth shape representation synced to the audio using a time-delayed LSTM, and “video generation” conditioned on the mouth shape using a “U-Net” architecture. Although the results seem promising, ObamaNet only models the mouth region and the videos are not generated from noise, so it can be regarded as video prediction rather than video generation.
Another meaningful trial of using synthesised videos for automated assistance is to translate spoken language (e.g. text) into sign language video sequences (i.e. T2S) BIBREF63. This is often achieved through a two-step process: converting texts into meaningful units to generate images, followed by a learning component to arrange the images into sequential order for the best representation. More specifically, using RNN based machine translation methods, texts are translated into sign language gloss sequences. Then, glosses are mapped to skeletal pose sequences using a lookup table. To generate videos, a conditional DCGAN is built whose input is the concatenation of the latent representation of a base-pose image and the skeletal pose information.
Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs ::: T2V
In BIBREF64, a text-to-video model (T2V) is proposed based on the cGAN, in which the input to the generator is isometric Gaussian noise combined with a text-gist vector. A key component of generating videos from text is to train a conditional generative model to extract both static and dynamic information from text, followed by a hybrid framework combining a Variational Autoencoder (VAE) and a Generative Adversarial Network (GAN).
More specifically, T2V relies on two types of features, static features and dynamic features, to generate videos. Static features, called “gist”, are used to sketch the text-conditioned background color and object layout structure. Dynamic features, on the other hand, are considered by transforming the input text into an image filter which eventually forms the video generator, which consists of three entangled neural networks. The text-gist vector is generated by a gist generator, which maintains static information (e.g. background), and a text2filter, which captures the dynamic information (i.e. actions) in the text to generate videos.
As demonstrated in the paper BIBREF64, the generated videos are semantically related to the texts, but have a rather low quality (e.g. only $64 \times 64$ resolution).
Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs ::: StoryGAN
Different from T2V, which generates videos from a single text, StoryGAN aims to produce dynamic scenes consistent with the specified texts (i.e. a story written in a multi-sentence paragraph) using a sequential GAN model BIBREF65. The story encoder, context encoder, and discriminators are the main components of this model. By using stochastic sampling, the story encoder intends to learn a low-dimensional embedding vector for the whole story to keep the continuity of the story. The context encoder is proposed to capture contextual information during sequential image generation based on a deep RNN. The two discriminators of StoryGAN are an image discriminator, which evaluates the generated images, and a story discriminator, which ensures global consistency.
The experiments and comparisons, on the CLEVR dataset and the Pororo cartoon dataset, which were originally used for visual question answering, show that StoryGAN improves the generated video quality in terms of Structural Similarity Index (SSIM), visual quality, consistency, and relevance (the last three measures are based on human evaluation).
GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Text-to-image Synthesis Applications
Computer vision applications have strong potential for industries including but not limited to the medical, government, military, entertainment, and online social media fields BIBREF7, BIBREF66, BIBREF67, BIBREF68, BIBREF69, BIBREF70. Text-to-image synthesis is one such application in computer vision AI that has become the main focus in recent years due to its potential for providing beneficial properties and opportunities for a wide range of applicable areas.
Text-to-image synthesis is an application byproduct of deep convolutional decoder networks in combination with GANs BIBREF7, BIBREF8, BIBREF10. Deep convolutional networks have contributed to several breakthroughs in image, video, speech, and audio processing. This learning method intends, among other possibilities, to help translate sequential text descriptions into images, supplemented by one or many additional methods. Algorithms and methods developed in the computer vision field have allowed researchers in recent years to create realistic images from plain sentences. Advances in computer vision, deep convolutional nets, and semantic units have shed light on and redirected focus to this research area of text-to-image synthesis, whose prime directive is to aid in the generation of compelling images with as much fidelity to text descriptions as possible.
To date, models for generating synthetic images from textual natural language in research laboratories at universities and private companies have yielded compelling images of flowers and birds BIBREF8. Though flowers and birds are the most common objects studied thus far, research has been applied to other classes as well. For example, there have been studies focused solely on human faces BIBREF7, BIBREF8, BIBREF71, BIBREF72.
It is a fascinating time for computer vision AI and deep learning researchers and enthusiasts. The consistent advancement of hardware and software, together with the contemporaneous development of computer vision AI research, is disrupting multiple industries. These advances in technology allow for the extraction of several data types from a variety of sources. For example, image data captured from a variety of photo-ready devices, such as smartphones, and online social media services have opened the door to the analysis of large amounts of media datasets BIBREF70. The availability of large media datasets allows new frameworks and algorithms to be proposed and tested on real-world data.
GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Text-to-image Synthesis Benchmark Datasets
A summary of some reviewed methods and benchmark datasets used for validation is reported in Table TABREF43. In addition, the performance of different GANs with respect to the benchmark datasets and performance metrics is reported in Table TABREF48.
In order to synthesize images from text descriptions, many frameworks have taken a minimalistic approach by creating small and background-less images BIBREF73. In most cases, the experiments were conducted on simple datasets, initially containing images of birds and flowers. BIBREF8 contributed to these data sets by adding corresponding natural language text descriptions to subsets of the CUB, MSCOCO, and Oxford-102 datasets, which facilitated the work on text-to-image synthesis for several papers released more recently.
While most deep learning algorithms use the MNIST BIBREF74 dataset as the benchmark, there are four main datasets that are commonly used for evaluation of proposed GAN models for text-to-image synthesis: CUB BIBREF75, Oxford BIBREF76, COCO BIBREF77, and CIFAR-10 BIBREF78. CUB BIBREF75 contains 200 bird categories with matching text descriptions, and Oxford BIBREF76 contains 102 categories of flowers with 40 to 258 images each and matching text descriptions. These datasets contain individual objects, with the text description corresponding to that object, making them relatively simple. COCO BIBREF77 is much more complex, containing 328k images with 91 different object types. The CIFAR-10 BIBREF78 dataset consists of 60000 32$\times $32 colour images in 10 classes, with 6000 images per class. In contrast to CUB and Oxford, whose images each contain an individual object, COCO's images may contain multiple objects, each with a label, so there are many labels per image. The total number of labels over the 328k images is 2.5 million BIBREF77.
GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Text-to-image Synthesis Benchmark Evaluation Metrics
Several evaluation metrics are used for judging the images produced by text-to-image GANs. Proposed by BIBREF25, the Inception Score (IS) relies on the conditional class distribution of each generated image, obtained by applying the Inception model introduced in BIBREF79, and the marginal class distribution over a large set of generated images; for meaningful images, the entropy of the former should be low and that of the latter high. Low entropy of the conditional distribution means that the evaluator is confident that the images came from the data distribution, and high entropy of the marginal distribution means that the set of generated images is diverse, which are both desired features. The IS is then computed from the KL-divergence between the conditional and marginal distributions. FCN-scores BIBREF2 are computed in a similar manner, relying on the intuition that realistic images generated by a GAN should be classified correctly by a classifier trained on real images of the same distribution. Therefore, if the FCN classifier classifies a set of synthetic images accurately, the images are probably realistic, and the corresponding GAN gets a high FCN score. Frechet Inception Distance (FID) BIBREF80 is the other commonly used evaluation metric, and takes a different approach, actually comparing the generated images to real images in the distribution. A high FID means there is little relationship between the statistics of the synthetic and real images and vice versa, so lower FIDs are better.
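The Inception Score computation described above can be sketched as follows, starting from the softmax class probabilities produced by a pretrained Inception model for a set of generated images; the split-averaging commonly used in practice is omitted for brevity.

import numpy as np

# Sketch of the Inception Score: exponentiated mean KL divergence between each
# image's conditional class distribution p(y|x) and the marginal p(y) over the
# set. `probs` is an (N, num_classes) array of Inception softmax outputs.
def inception_score(probs, eps=1e-12):
    marginal = probs.mean(axis=0, keepdims=True)                           # p(y)
    kl = (probs * (np.log(probs + eps) - np.log(marginal + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))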
The performance of different GANs with respect to the benchmark datasets and performance metrics is reported in Table TABREF48. In addition, Figure FIGREF49 further lists the performance of 14 GANs with respect to their Inception Scores (IS).
GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: GAN Based Text-to-image Synthesis Results Comparison
While we gathered all the data we could find on scores for each model on the CUB, Oxford, and COCO datasets using IS, FID, FCN, and human classifiers, we unfortunately were unable to find certain data for AttnGAN and HDGAN (missing in Table TABREF48). The best evaluation we can give for those with missing data is our own opinion, formed by looking at examples of generated images provided in their papers. In this regard, we observed that HDGAN produced relatively better visual results on the CUB and Oxford datasets while AttnGAN produced far more impressive results than the rest on the more complex COCO dataset. This is evidence that the attentional model and DAMSM introduced by AttnGAN are very effective in producing high-quality images. Examples of the best results of birds and plates of vegetables generated by each model are presented in Figures FIGREF50 and FIGREF51, respectively.
In terms of inception score (IS), which is the metric applied to the majority of models except DC-GAN, the results in Table TABREF48 show that StackGAN++ only showed slight improvement over its predecessor, StackGAN, for text-to-image synthesis. However, StackGAN++ did introduce a very worthy enhancement for unconditional image generation by organizing the generators and discriminators in a “tree-like” structure. This indicates that revising the structures of the discriminators and/or generators can bring a moderate level of improvement in text-to-image synthesis.
In addition, the results in Table TABREF48 also show that DM-GAN BIBREF53 has the best performance, followed by Obj-GAN BIBREF81. Notice that both DM-GAN and Obj-GAN are the most recently developed methods in the field (both published in 2019), indicating that research in text-to-image synthesis is continuously improving the results for better visual perception and interpretation. Technically, DM-GAN BIBREF53 is a model using dynamic memory to refine fuzzy image contents initially generated from the GAN networks. A memory writing gate is used by DM-GAN to select important text information and generate images based on the selected text accordingly. On the other hand, Obj-GAN BIBREF81 focuses on object-centered text-to-image synthesis. The proposed framework of Obj-GAN consists of a layout generator, including a bounding box generator and a shape generator, and an object-driven attentive image generator. The designs and advancement of DM-GAN and Obj-GAN indicate that research in text-to-image synthesis is advancing to put more emphasis on image details and text semantics for better understanding and perception.
GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Notable Mentions
It is worth noting that although this survey mainly focuses on text-to-image synthesis, there have been other applications of GANs in broader image synthesis field that we found fascinating and worth dedicating a small section to. For example, BIBREF72 used Sem-Latent GANs to generate images of faces based on facial attributes, producing impressive results that, at a glance, could be mistaken for real faces. BIBREF82, BIBREF70, and BIBREF83 demonstrated great success in generating text descriptions from images (image captioning) with great accuracy, with BIBREF82 using an attention-based model that automatically learns to focus on salient objects and BIBREF83 using deep visual-semantic alignments. Finally, there is a contribution made by StackGAN++ that was not mentioned in the dedicated section due to its relation to unconditional image generation as opposed to conditional, namely a color-regularization term BIBREF47. This additional term aims to keep the samples generated from the same input at different stages more consistent in color, which resulted in significantly better results for the unconditional model.
Conclusion
The recent advancement in text-to-image synthesis research opens the door to several compelling methods and architectures. The main objective of text-to-image synthesis initially was to create images from simple labels, and this objective later scaled to natural languages. In this paper, we reviewed novel methods that generate, in our opinion, the most visually-rich and photo-realistic images, from text-based natural language. These generated images often rely on generative adversarial networks (GANs), deep convolutional decoder networks, and multimodal learning methods.
In this paper, we first proposed a taxonomy to organize GAN based text-to-image synthesis frameworks into four major groups: semantic enhancement GANs, resolution enhancement GANs, diversity enhancement GANs, and motion enhancement GANs. The taxonomy provides a clear roadmap to show the motivations, architectures, and differences between methods, and also outlines their evolution timeline and relationships. Following the proposed taxonomy, we reviewed important features of each method and their architectures. We indicated the model definition and key contributions of some advanced GAN frameworks, including StackGAN, StackGAN++, AttnGAN, DC-GAN, AC-GAN, TAC-GAN, HDGAN, Text-SeGAN, StoryGAN etc. Many of the solutions surveyed in this paper tackled the highly complex challenge of generating photo-realistic images beyond swatch-size samples, i.e., beyond the work of BIBREF8 in which images were generated from text in 64$\times $64 tiny swatches. Lastly, all methods were evaluated on datasets that included birds, flowers, humans, and other miscellaneous elements. We also identified some important papers that were as impressive as the papers we finally surveyed, although these notable papers have yet to contribute directly or indirectly to the expansion of the vast computer vision AI field. Looking into the future, an excellent extension of the works surveyed in this paper would be to give more independence to the several learning methods (e.g. less human intervention) involved in the studies, as well as increasing the size of the output images.
Conflict of Interest
The authors declare that there is no conflict of interest regarding the publication of this article. | give more independence to the several learning methods (e.g. less human intervention) involved in the studies, increasing the size of the output images |
e020677261d739c35c6f075cde6937d0098ace7f | e020677261d739c35c6f075cde6937d0098ace7f_0 | Q: What is the conclusion of comparison of proposed solution?
Text: Introduction
“ (GANs), and the variations that are now being proposed is the most interesting idea in the last 10 years in ML, in my opinion.” (2016)
– Yann LeCun
A picture is worth a thousand words! While written text provides efficient, effective, and concise ways for communication, visual content, such as images, is a more comprehensive, accurate, and intelligible method of information sharing and understanding. Generation of images from text descriptions, i.e. text-to-image synthesis, is a complex computer vision and machine learning problem that has seen great progress over recent years. Automatic image generation from natural language may allow users to describe visual elements through visually-rich text descriptions. The ability to do so effectively is highly desirable as it could be used in artificial intelligence applications such as computer-aided design, image editing BIBREF0, BIBREF1, game engines for the development of the next generation of video games BIBREF2, and pictorial art generation BIBREF3.
Introduction ::: Traditional Learning Based Text-to-image Synthesis
In the early stages of research, text-to-image synthesis was mainly carried out through a combined search and supervised learning process BIBREF4, as shown in Figure FIGREF4. In order to connect text descriptions to images, one could use correlation between keywords (or keyphrases) and images that identifies informative and “picturable” text units; then, these units would search for the most likely image parts conditioned on the text, eventually optimizing the picture layout conditioned on both the text and the image parts. Such methods often integrated multiple artificial intelligence key components, including natural language processing, computer vision, computer graphics, and machine learning.
The major limitation of the traditional learning based text-to-image synthesis approaches is that they lack the ability to generate new image content; they can only change the characteristics of the given/training images. Alternatively, research in generative models has advanced significantly and delivers solutions to learn from training images and produce new visual content. For example, Attribute2Image BIBREF5 models each image as a composite of foreground and background. In addition, a layered generative model with disentangled latent variables is learned, using a variational auto-encoder, to generate visual content. Because the learning is customized/conditioned by given attributes, the generative models of Attribute2Image can generate images with respect to different attributes, such as gender, hair color, age, etc., as shown in Figure FIGREF5.
Introduction ::: GAN Based Text-to-image Synthesis
Although generative model based text-to-image synthesis provides much more realistic image synthesis results, the image generation is still conditioned by the limited attributes. In recent years, several papers have been published on the subject of text-to-image synthesis. Most of the contributions from these papers rely on multimodal learning approaches that include generative adversarial networks and deep convolutional decoder networks as their main drivers to generate entrancing images from text BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11.
First introduced by Ian Goodfellow et al. BIBREF9, generative adversarial networks (GANs) consist of two neural networks, a generator and a discriminator. These two models compete with one another, with the generator attempting to produce synthetic/fake samples that will fool the discriminator and the discriminator attempting to differentiate between real (genuine) and synthetic samples. Because GANs' adversarial training aims to cause generators to produce images similar to the real (training) images, GANs can naturally be used to generate synthetic images (image synthesis), and this process can even be customized further by using text descriptions to specify the types of images to generate, as shown in Figure FIGREF6.
Much like text-to-speech and speech-to-text conversion, there exists a wide variety of problems that text-to-image synthesis could solve in the computer vision field specifically BIBREF8, BIBREF12. Nowadays, researchers are attempting to solve a plethora of computer vision problems with the aid of deep convolutional networks, generative adversarial networks, and a combination of multiple methods, often called multimodal learning methods BIBREF8. For simplicity, multiple learning methods will be referred to as multimodal learning hereafter BIBREF13. Researchers often describe multimodal learning as a method that incorporates characteristics from several methods, algorithms, and ideas. This can include ideas from two or more learning approaches in order to create a robust implementation to solve an uncommon problem or improve a solution BIBREF8, BIBREF14, BIBREF15, BIBREF16, BIBREF17.
In this survey, we focus primarily on reviewing recent works that aim to solve the challenge of text-to-image synthesis using generative adversarial networks (GANs). In order to provide a clear roadmap, we propose a taxonomy to summarize reviewed GANs into four major categories. Our review will elaborate on the motivations of methods in each category, analyze typical models, their network architectures, and possible drawbacks for further improvement. The visual abstract of the survey and the list of reviewed GAN frameworks is shown in Figure FIGREF8.
The remainder of the survey is organized as follows. Section 2 presents a brief summary of existing works on subjects similar to that of this paper and highlights the key distinctions making ours unique. Section 3 gives a short introduction to GANs and some preliminary concepts related to image generation, as they are the engines that make text-to-image synthesis possible and are essential building blocks to achieve photo-realistic images from text descriptions. Section 4 proposes a taxonomy to summarize GAN based text-to-image synthesis, discusses models and architectures of novel works focused solely on text-to-image synthesis. This section will also draw key contributions from these works in relation to their applications. Section 5 reviews GAN based text-to-image synthesis benchmarks, performance metrics, and comparisons, including a simple review of GANs for other applications. In section 6, we conclude with a brief summary and outline ideas for future interesting developments in the field of text-to-image synthesis.
Related Work
With the growth and success of GANs, deep convolutional decoder networks, and multimodal learning methods, these techniques were some of the first procedures which aimed to solve the challenge of image synthesis. Many engineers and scientists in computer vision and AI have contributed through extensive studies and experiments, with numerous proposals and publications detailing their contributions. Because GANs, introduced by BIBREF9, are emerging research topics, their practical applications to image synthesis are still in their infancy. Recently, many new GAN architectures and designs have been proposed to use GANs for different applications, e.g. using GANs to generate sentimental texts BIBREF18, or using GANs to transform natural images into cartoons BIBREF19.
Although GANs are becoming increasingly popular, very few survey papers currently exist to summarize and outline contemporaneous technical innovations and contributions of different GAN architectures BIBREF20, BIBREF21. Survey papers specifically attuned to analyzing different contributions to text-to-image synthesis using GANs are even more scarce. We have thus found two surveys BIBREF6, BIBREF7 on image synthesis using GANs, which are the two most closely related publications to our survey objective. In the following paragraphs, we briefly summarize each of these surveys and point out how our objectives differ from theirs.
In BIBREF6, the authors provide an overview of image synthesis using GANs. In this survey, the authors discuss the motivations for research on image synthesis and introduce some background information on the history of GANs, including a section dedicated to core concepts of GANs, namely generators, discriminators, and the min-max game analogy, and some enhancements to the original GAN model, such as conditional GANs, addition of variational auto-encoders, etc.. In this survey, we will carry out a similar review of the background knowledge because the understanding of these preliminary concepts is paramount for the rest of the paper. Three types of approaches for image generation are reviewed, including direct methods (single generator and discriminator), hierarchical methods (two or more generator-discriminator pairs, each with a different goal), and iterative methods (each generator-discriminator pair generates a gradually higher-resolution image). Following the introduction, BIBREF6 discusses methods for text-to-image and image-to-image synthesis, respectively, and also describes several evaluation metrics for synthetic images, including inception scores and Frechet Inception Distance (FID), and explains the significance of the discriminators acting as learned loss functions as opposed to fixed loss functions.
Different from the above survey, which has a relatively broad scope in GANs, our objective is heavily focused on text-to-image synthesis. Although this topic, text-to-image synthesis, has indeed been covered in BIBREF6, they did so in a much less detailed fashion, mostly listing the many different works in a time-sequential order. In comparison, we will review several representative methods in the field and outline their models and contributions in detail.
Similarly to BIBREF6, the second survey paper BIBREF7 begins with a standard introduction addressing the motivation of image synthesis and the challenges it presents followed by a section dedicated to core concepts of GANs and enhancements to the original GAN model. In addition, the paper covers the review of two types of applications: (1) unconstrained applications of image synthesis such as super-resolution, image inpainting, etc., and (2) constrained image synthesis applications, namely image-to-image, text-to-image, and sketch-to image, and also discusses image and video editing using GANs. Again, the scope of this paper is intrinsically comprehensive, while we focus specifically on text-to-image and go into more detail regarding the contributions of novel state-of-the-art models.
Other surveys have been published on related matters, mainly related to the advancements and applications of GANs BIBREF22, BIBREF23, but we have not found any prior works which focus specifically on text-to-image synthesis using GANs. To our knowledge, this is the first paper to do so.
Preliminaries and Frameworks
In this section, we first introduce preliminary knowledge of GANs and one of its commonly used variants, conditional GAN (i.e. cGAN), which is the building block for many GAN based text-to-image synthesis models. After that, we briefly separate GAN based text-to-image synthesis into two types, simple GAN frameworks vs. advanced GAN frameworks, and discuss why advanced GAN architectures are needed for text-to-image synthesis.
Notice that the simple vs. advanced GAN framework separation is rather coarse; the next section will propose a taxonomy that summarizes advanced GAN frameworks into four categories, based on their objectives and designs.
Preliminaries and Frameworks ::: Generative Adversarial Neural Network
Before moving on to a discussion and analysis of works applying GANs for text-to-image synthesis, there are some preliminary concepts, enhancements of GANs, datasets, and evaluation metrics that are present in some of the works described in the next section and are thus worth introducing.
As stated previously, GANs were introduced by Ian Goodfellow et al. BIBREF9 in 2014, and consist of two deep neural networks, a generator and a discriminator, which are trained independently with conflicting goals: The generator aims to generate samples closely related to the original data distribution and fool the discriminator, while the discriminator aims to distinguish between samples from the generator model and samples from the true data distribution by calculating the probability of the sample coming from either source. A conceptual view of the generative adversarial network (GAN) architecture is shown in Figure FIGREF11.
The training of GANs is an iterative process that, with each iteration, updates the generator and the discriminator with the goal of each defeating the other, leading each model to become increasingly adept at its specific task until a threshold is reached. This is analogous to a min-max game between the two models, according to the following equation:
In Eq. (DISPLAY_FORM10), $x$ denotes a multi-dimensional sample, e.g., an image, and $z$ denotes a multi-dimensional latent space vector, e.g., a multidimensional data point following a predefined distribution function such as that of normal distributions. $D_{\theta _d}()$ denotes a discriminator function, controlled by parameters $\theta _d$, which aims to classify a sample into a binary space. $G_{\theta _g}()$ denotes a generator function, controlled by parameters $\theta _g$, which aims to generate a sample from some latent space vector. For example, $G_{\theta _g}(z)$ means using a latent vector $z$ to generate a synthetic/fake image, and $D_{\theta _d}(x)$ means to classify an image $x$ as binary output (i.e. true/false or 1/0). In the GAN setting, the discriminator $D_{\theta _d}()$ is learned to distinguish a genuine/true image (labeled as 1) from fake images (labeled as 0). Therefore, given a true image $x$, the ideal output from the discriminator $D_{\theta _d}(x)$ would be 1. Given a fake image generated from the generator $G_{\theta _g}(z)$, the ideal prediction from the discriminator $D_{\theta _d}(G_{\theta _g}(z))$ would be 0, indicating the sample is a fake image.
Following the above definition, the $\min \max $ objective function in Eq. (DISPLAY_FORM10) aims to learn parameters for the discriminator ($\theta _d$) and generator ($\theta _g$) to reach an optimization goal: The discriminator intends to differentiate true vs. fake images with maximum capability $\max _{\theta _d}$ whereas the generator intends to minimize the difference between a fake image vs. a true image $\min _{\theta _g}$. In other words, the discriminator sets the characteristics and the generator produces elements, often images, iteratively until it meets the attributes set forth by the discriminator. GANs are often used with images and other visual elements and are notoriously efficient in generating compelling and convincing photorealistic images. Most recently, GANs were used to generate an original painting in an unsupervised fashion BIBREF24. The following sections go into further detail regarding how the generator and discriminator are trained in GANs.
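In practice, the min-max objective in Eq. (DISPLAY_FORM10) is commonly implemented as two binary cross-entropy losses, one for the discriminator and one for the generator. The minimal PyTorch sketch below is our own illustration of this translation rather than code from any surveyed paper; the non-saturating generator loss shown here is a widely used substitute for directly minimizing $\log (1-D_{\theta _d}(G_{\theta _g}(z)))$.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(d_real_logits, d_fake_logits):
    # max_{theta_d}: push D(x) toward 1 for real samples and D(G(z)) toward 0 for fakes.
    real_loss = F.binary_cross_entropy_with_logits(
        d_real_logits, torch.ones_like(d_real_logits))
    fake_loss = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.zeros_like(d_fake_logits))
    return real_loss + fake_loss

def generator_loss(d_fake_logits):
    # min_{theta_g}: the non-saturating variant pushes the discriminator's
    # output on fake samples toward 1 (i.e. maximizes log D(G(z))).
    return F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
```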
Generator - In image synthesis, the generator network can be thought of as a mapping from one representation space (latent space) to another (actual data) BIBREF21. When it comes to image synthesis, all of the images in the data space fall into some distribution in a very complex and high-dimensional feature space. Sampling from such a complex space is very difficult, so GANs instead train a generator to create synthetic images from a much more simple feature space (usually random noise) called the latent space. The generator network performs up-sampling of the latent space and is usually a deep neural network consisting of several convolutional and/or fully connected layers BIBREF21. The generator is trained using gradient descent to update the weights of the generator network with the aim of producing data (in our case, images) that the discriminator classifies as real.
Discriminator - The discriminator network can be thought of as a mapping from image data to the probability of the image coming from the real data space, and is also generally a deep neural network consisting of several convolution and/or fully connected layers. However, the discriminator performs down-sampling as opposed to up-sampling. Like the generator, it is trained using gradient descent but its goal is to update the weights so that it is more likely to correctly classify images as real or fake.
In GANs, the ideal outcome is for both the generator's and discriminator's cost functions to converge so that the generator produces photo-realistic images that are indistinguishable from real data, and the discriminator at the same time becomes an expert at differentiating between real and synthetic data. This, however, is not possible since a reduction in cost of one model generally leads to an increase in cost of the other. This phenomenon makes training GANs very difficult, and training them simultaneously (both models performing gradient descent in parallel) often leads to a stable orbit where neither model is able to converge. To combat this, the generator and discriminator are often trained independently. In this case, the GAN remains the same, but there are different training stages. In one stage, the weights of the generator are kept constant and gradient descent updates the weights of the discriminator, and in the other stage the weights of the discriminator are kept constant while gradient descent updates the weights of the generator. This is repeated for some number of epochs until a desired low cost for each model is reached BIBREF25.
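The alternating two-stage schedule described above can be illustrated with a toy example. The sketch below trains a tiny MLP generator and discriminator on synthetic 2-D data purely to show the update order; all architectures, dimensions, and hyper-parameters are illustrative assumptions rather than settings from any surveyed model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny toy generator/discriminator (MLPs on 2-D data); real text-to-image
# models use deep convolutional networks instead.
latent_dim = 8
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.LeakyReLU(0.2), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real_data = torch.randn(256, 2) * 0.5 + 2.0  # stand-in for the "true" distribution

for step in range(200):
    real = real_data[torch.randint(0, 256, (64,))]

    # Stage 1: update the discriminator while the generator is held fixed.
    with torch.no_grad():
        fake = G(torch.randn(64, latent_dim))
    d_loss = (F.binary_cross_entropy_with_logits(D(real), torch.ones(64, 1)) +
              F.binary_cross_entropy_with_logits(D(fake), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Stage 2: update the generator while the discriminator's weights stay fixed
    # (only the generator optimizer takes a step).
    fake = G(torch.randn(64, latent_dim))
    g_loss = F.binary_cross_entropy_with_logits(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```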
Preliminaries and Frameworks ::: cGAN: Conditional GAN
Conditional Generative Adversarial Networks (cGAN) are an enhancement of GANs proposed by BIBREF26 shortly after the introduction of GANs by BIBREF9. The objective function of the cGAN is defined in Eq. (DISPLAY_FORM13) which is very similar to the GAN objective function in Eq. (DISPLAY_FORM10) except that the inputs to both discriminator and generator are conditioned by a class label $y$.
The main technical innovation of cGAN is that it introduces an additional input or inputs to the original GAN model, allowing the model to be trained on information such as class labels or other conditioning variables as well as the samples themselves, concurrently. Whereas the original GAN was trained only with samples from the data distribution, resulting in the generated sample reflecting the general data distribution, cGAN enables directing the model to generate more tailored outputs.
In Figure FIGREF14, the condition vector is the class label (text string) "Red bird", which is fed to both the generator and discriminator. It is important, however, that the condition vector is related to the real data. If the model in Figure FIGREF14 was trained with the same set of real data (red birds) but the condition text was "Yellow fish", the generator would learn to create images of red birds when conditioned with the text "Yellow fish".
Note that the condition vector in cGAN can come in many forms, such as texts, not just limited to the class label. Such a unique design provides a direct solution to generate images conditioned by predefined specifications. As a result, cGAN has been used in text-to-image synthesis since the very first day of its invention although modern approaches can deliver much better text-to-image synthesis results.
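A minimal sketch of this conditioning mechanism is given below: a class label (here the hypothetical label id 3 standing in for “Red bird”) is embedded and concatenated with the generator's noise vector and with the discriminator's input. The MLP layers and dimensions are illustrative assumptions and are not taken from BIBREF26.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Minimal cGAN-style generator: the class label is embedded and
    concatenated with the noise vector before up-sampling (here a plain MLP)."""
    def __init__(self, n_classes=10, latent_dim=100, img_dim=64 * 64 * 3):
        super().__init__()
        self.label_emb = nn.Embedding(n_classes, 32)
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 32, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh())

    def forward(self, z, labels):
        return self.net(torch.cat([z, self.label_emb(labels)], dim=1))

class ConditionalDiscriminator(nn.Module):
    """The same kind of label embedding conditions the discriminator's decision."""
    def __init__(self, n_classes=10, img_dim=64 * 64 * 3):
        super().__init__()
        self.label_emb = nn.Embedding(n_classes, 32)
        self.net = nn.Sequential(
            nn.Linear(img_dim + 32, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1))

    def forward(self, img, labels):
        return self.net(torch.cat([img, self.label_emb(labels)], dim=1))

# Usage: generate a batch of images conditioned on a hypothetical class id.
G = ConditionalGenerator()
z = torch.randn(4, 100)
labels = torch.full((4,), 3, dtype=torch.long)
fake_imgs = G(z, labels)  # shape: (4, 64*64*3)
```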
Preliminaries and Frameworks ::: Simple GAN Frameworks for Text-to-Image Synthesis
In order to generate images from text, one simple solution is to employ the conditional GAN (cGAN) designs and add conditions to the training samples, such that the GAN is trained with respect to the underlying conditions. Several pioneer works have followed similar designs for text-to-image synthesis.
An essential disadvantage of using cGAN for text-to-image synthesis is that it cannot handle complicated textual descriptions for image generation, because cGAN uses labels as conditions to restrict the GAN inputs. If the text inputs have multiple keywords (or long text descriptions) they cannot be used simultaneously to restrict the input. Instead of using text as conditions, another two approaches BIBREF8, BIBREF16 use text as input features, and concatenate such features with other features to train the discriminator and generator, as shown in Figure FIGREF15(b) and (c). To ensure text can be used as GAN input, a feature embedding or feature representation learning BIBREF29, BIBREF30 function $\varphi ()$ is often introduced to convert the input text into numeric features, which are further concatenated with other features to train GANs.
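The sketch below illustrates this alternative design: a text encoder plays the role of the embedding function $\varphi ()$, and its sentence embedding is concatenated with the noise vector to form the generator input. The GRU encoder and all dimensions here are simplifying assumptions; the surveyed methods use pretrained char-CNN-RNN or similar encoders.

```python
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """A stand-in for the embedding function phi(): a GRU over word ids."""
    def __init__(self, vocab_size=5000, embed_dim=128, out_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, out_dim, batch_first=True)

    def forward(self, token_ids):
        _, h = self.rnn(self.embed(token_ids))
        return h.squeeze(0)  # (batch, out_dim) sentence embedding

class TextConditionedGenerator(nn.Module):
    """Noise z concatenated with phi(text) forms the generator input."""
    def __init__(self, latent_dim=100, text_dim=128, img_dim=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + text_dim, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh())

    def forward(self, z, text_feat):
        return self.net(torch.cat([z, text_feat], dim=1))

# Usage with random token ids standing in for a tokenized caption.
phi, G = TextEncoder(), TextConditionedGenerator()
tokens = torch.randint(0, 5000, (4, 12))
fake_imgs = G(torch.randn(4, 100), phi(tokens))
```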
Preliminaries and Frameworks ::: Advanced GAN Frameworks for Text-to-Image Synthesis
Motivated by the GAN and conditional GAN (cGAN) design, many GAN based frameworks have been proposed to generate images, with different designs and architectures, such as using multiple discriminators, using progressively trained discriminators, or using hierarchical discriminators. Figure FIGREF17 outlines several advanced GAN frameworks in the literature. In addition to these frameworks, many new designs are being proposed to advance the field with rather sophisticated architectures. For example, a recent work BIBREF37 proposes to use a pyramid generator and three independent discriminators, each focusing on a different aspect of the images, to lead the generator towards creating images that are photo-realistic on multiple levels. Another recent publication BIBREF38 proposes to use a discriminator to measure semantic relevance between image and text instead of class prediction (as most discriminators in GANs do), resulting in a new GAN structure that outperforms the text conditioned auxiliary classifier (TAC-GAN) BIBREF16 and generates images that are diverse, realistic, and relevant to the input text regardless of class.
In the following section, we will first propose a taxonomy that summarizes advanced GAN frameworks for text-to-image synthesis, and review the most recently proposed solutions to the challenge of generating photo-realistic images conditioned on natural language text descriptions using GANs. The solutions we discuss are selected based on relevance and quality of contributions. Many publications exist on the subject of image generation using GANs, but in this paper we focus specifically on models for text-to-image synthesis, with the review emphasizing the “model” and “contributions” for text-to-image synthesis. At the end of this section, we also briefly review methods using GANs for other image-synthesis applications.
Text-to-Image Synthesis Taxonomy and Categorization
In this section, we propose a taxonomy to summarize advanced GAN based text-to-image synthesis frameworks, as shown in Figure FIGREF24. The taxonomy organizes GAN frameworks into four categories, including Semantic Enhancement GANs, Resolution Enhancement GANs, Diversity Enhancement GANs, and Motion Enhancement GANs. Following the proposed taxonomy, each subsection will introduce several typical frameworks and address their techniques of using GANs to solve certain aspects of the text-to-image synthesis challenges.
Text-to-Image Synthesis Taxonomy and Categorization ::: GAN based Text-to-Image Synthesis Taxonomy
Although the ultimate goal of text-to-image synthesis is to generate images closely related to the textual descriptions, the relevance of the images to the texts is often validated from different perspectives, due to the inherent diversity of human perceptions. For example, when generating images matching the description “rose flowers”, some users may know the exact type of flowers they like and intend to generate rose flowers with similar colors. Other users may seek to generate high quality rose flowers with a nice background (e.g. garden). The third group of users may be more interested in generating flowers similar to roses but with different colors and visual appearance, e.g. roses, begonia, and peony. The fourth group of users may want to not only generate flower images, but also use them to form a meaningful action, e.g. a video clip showing flower growth, performing a magic show using those flowers, or telling a love story using the flowers.
From the text-to-image synthesis point of view, the first group of users intend to precisely control the semantics of the generated images, and their goal is to match the texts and images at the semantic level. The second group of users are more focused on the resolution and quality of the images, in addition to the requirement that the images and texts are semantically related. For the third group of users, their goal is to diversify the output images, such that their images carry diversified visual appearances and are also semantically related. The fourth user group adds a new dimension in image synthesis, and aims to generate sequences of images which are coherent in temporal order, i.e. capture the motion information.
Based on the above descriptions, we categorize GAN based Text-to-Image Synthesis into a taxonomy with four major categories, as shown in Fig. FIGREF24.
Semantic Enhancement GANs: Semantic enhancement GANs represent pioneer works of GAN frameworks for text-to-image synthesis. The main focus of the GAN frameworks is to ensure that the generated images are semantically related to the input texts. This objective is mainly achieved by using a neural network to encode texts as dense features, which are further fed to a second network to generate images matching to the texts.
Resolution Enhancement GANs: Resolution enhancement GANs mainly focus on generating high quality images which are semantically matched to the texts. This is mainly achieved through a multi-stage GAN framework, where the outputs from earlier stage GANs are fed to the second (or later) stage GAN to generate better quality images.
Diversity Enhancement GANs: Diversity enhancement GANs intend to diversify the output images, such that the generated images are not only semantically related but also have different types and visual appearance. This objective is mainly achieved through an additional component to estimate semantic relevance between generated images and texts, in order to maximize the output diversity.
Motion Enhancement GANs: Motion enhancement GANs intend to add a temporal dimension to the output images, such that they can form meaningful actions with respect to the text descriptions. This goal is mainly achieved through a two-step process which first generates images matching the “actions” of the texts, followed by a mapping or alignment procedure to ensure that images are coherent in the temporal order.
In the following, we will introduce how these GAN frameworks evolve for text-to-image synthesis, and will also review some typical methods of each category.
Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs
Semantic relevance is one of the most important criteria of text-to-image synthesis. Most GANs discussed in this survey are required to generate images semantically related to the text descriptions. However, semantic relevance is a rather subjective measure, and images are inherently rich in terms of their semantics and interpretations. Therefore, many GANs are further proposed to enhance text-to-image synthesis from different perspectives. In this subsection, we will review several classical approaches which commonly serve as text-to-image synthesis baselines.
Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs ::: DC-GAN
Deep convolutional generative adversarial network (DC-GAN) BIBREF8 represents the pioneer work for text-to-image synthesis using GANs. Its main goal is to train a deep convolutional generative adversarial network (DC-GAN) on text features. These text features are encoded by another neural network, a hybrid character-level convolutional recurrent network, and both networks perform feed-forward inference conditioned on the text features. Generating realistic images automatically from natural language text is the motivation of several of the works proposed in this computer vision field. However, current artificial intelligence (AI) systems are still far from achieving this task BIBREF8, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF22, BIBREF26. Lately, recurrent neural networks have led the way in developing frameworks that learn discriminatively on text features. At the same time, generative adversarial networks (GANs) have recently begun to show some promise in generating compelling images of a whole host of elements including but not limited to faces, birds, flowers, and less common images such as room interiors BIBREF8. DC-GAN is a multimodal learning model that attempts to bridge together both of the above mentioned unsupervised machine learning algorithms, recurrent neural networks (RNN) and generative adversarial networks (GANs), with the purpose of speeding up the generation of images from text.
Deep learning has shed light on some of the most sophisticated advances in natural language representation, image synthesis BIBREF7, BIBREF8, BIBREF43, BIBREF35, and classification of generic data BIBREF44. However, the bulk of the latest breakthroughs in deep learning and computer vision were related to supervised learning BIBREF8. Even though natural language and image synthesis were part of several contributions on the supervised side of deep learning, unsupervised learning has recently seen a tremendous rise in attention from the research community, especially on two subproblems: text-based natural language and image synthesis BIBREF45, BIBREF14, BIBREF8, BIBREF46, BIBREF47. These subproblems are typically subdivided as focused research areas. DC-GAN's contributions are mainly driven by these two research areas. In order to generate plausible images from natural language, DC-GAN's contributions revolve around developing a straightforward yet effective GAN architecture and training strategy that allows natural text-to-image synthesis. These contributions are primarily tested on the Caltech-UCSD Birds and Oxford-102 Flowers datasets. Each image in these datasets carries five text descriptions. These text descriptions were created by the research team when setting up the evaluation environment. The DC-GAN model is subsequently trained on several subcategories, which in this research represent the training and testing sub-datasets. The performance shown by these experiments demonstrates a promising yet effective way to generate images from textual natural language descriptions BIBREF8.
Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs ::: DC-GAN Extensions
Following the pioneer DC-GAN framework BIBREF8, many studies propose revised network structures (e.g. different discriminators) in order to improve images with better semantic relevance to the texts. Based on the deep convolutional adversarial network (DC-GAN) architecture, GAN-CLS with an image-text matching discriminator, GAN-INT learned with text manifold interpolation, and GAN-INT-CLS which combines both are proposed to find a semantic match between text and image. Similar to the DC-GAN architecture, an adaptive loss function (i.e. perceptual loss BIBREF48) is proposed for semantic image synthesis which can synthesize a realistic image that not only matches the target text description but also keeps the irrelevant features (e.g. background) from source images BIBREF49. Regarding the perceptual losses, three loss functions (i.e. pixel reconstruction loss, activation reconstruction loss and texture reconstruction loss) are proposed in BIBREF50 in which they construct the network architectures based on the DC-GAN, i.e. GAN-INT-CLS-Pixel, GAN-INT-CLS-VGG and GAN-INT-CLS-Gram with respect to the three losses. In BIBREF49, a residual transformation unit is added in the network to retain similar structure of the source image.
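The matching-aware discriminator of GAN-CLS is commonly described as being trained on three kinds of pairs: {real image, matching text}, {real image, mismatched text}, and {fake image, matching text}, where only the first should be scored as real. The following is a minimal sketch of such a loss under that description; the toy discriminator and the 0.5 weighting follow the commonly cited recipe and should be treated as an approximation rather than the exact published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def matching_aware_d_loss(D, real_imgs, fake_imgs, matching_txt, mismatched_txt):
    """GAN-CLS style discriminator loss: only {real image, matching text} pairs
    are scored as real; fakes and mismatched texts are both scored as fake."""
    ones = torch.ones(real_imgs.size(0), 1)
    zeros = torch.zeros(real_imgs.size(0), 1)
    loss_real = F.binary_cross_entropy_with_logits(D(real_imgs, matching_txt), ones)
    loss_mismatch = F.binary_cross_entropy_with_logits(D(real_imgs, mismatched_txt), zeros)
    loss_fake = F.binary_cross_entropy_with_logits(D(fake_imgs, matching_txt), zeros)
    return loss_real + 0.5 * (loss_mismatch + loss_fake)

# Toy usage with a linear "discriminator" over flattened image + text features.
class TinyD(nn.Module):
    def __init__(self, img_dim=64, txt_dim=16):
        super().__init__()
        self.fc = nn.Linear(img_dim + txt_dim, 1)
    def forward(self, img, txt):
        return self.fc(torch.cat([img, txt], dim=1))

D = TinyD()
loss = matching_aware_d_loss(D, torch.randn(8, 64), torch.randn(8, 64),
                             torch.randn(8, 16), torch.randn(8, 16))
```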
Following BIBREF49, and considering that features in early CNN layers capture the background while the foreground is obtained in later layers, a pair of discriminators with different architectures (i.e. Paired-D GAN) is proposed to synthesize the background and foreground from a source image separately BIBREF51. Meanwhile, skip-connections in the generator are employed to more precisely retain background information in the source image.
Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs ::: MC-GAN
When synthesising images, most text-to-image synthesis methods consider each output image as one single unit to characterize its semantic relevance to the texts. This is likely problematic because most images naturally consist of two crucial components: foreground and background. Without properly separating these two components, it's hard to characterize the semantics of an image if the whole image is treated as a single unit without proper separation.
In order to enhance the semantic relevance of the images, a multi-conditional GAN (MC-GAN) BIBREF52 is proposed to synthesize a target image by combining the background of a source image and a text-described foreground object which does not exist in the source image. A unique feature of MC-GAN is that it proposes a synthesis block in which the background feature is extracted from the given image without a non-linear function (i.e. only using convolution and batch normalization) and the foreground feature is the feature map from the previous layer.
Because MC-GAN is able to properly model the background and foreground of the generated images, a unique strength of MC-GAN is that users are able to provide a base image and MC-GAN is able to preserve the background information of the base image to generate new images.
Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs
Due to the fact that training GANs is much more difficult when generating high-resolution images, a two-stage GAN (i.e. StackGAN) is proposed in which rough (i.e. low-resolution) images are generated in stage-I and refined in stage-II. To further improve the quality of generated images, the second version of StackGAN (i.e. StackGAN++) is proposed to use multi-stage GANs to generate multi-scale images. A color-consistency regularization term is also added into the loss to keep the consistency of images across different scales.
While StackGAN and StackGAN++ are both built on the global sentence vector, AttnGAN is proposed to use an attention mechanism (i.e. the Deep Attentional Multimodal Similarity Model (DAMSM)) to model multi-level information (i.e. word level and sentence level) in GANs. In the following, StackGAN, StackGAN++ and AttnGAN will be explained in detail.
Recently, the Dynamic Memory Generative Adversarial Network (i.e. DM-GAN) BIBREF53, which uses a dynamic memory component, is proposed to focus on refining the initially generated image, which is the key to the success of generating high quality images.
Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: StackGAN
In 2017, Zhang et al. proposed a model for generating photo-realistic images from text descriptions called StackGAN (Stacked Generative Adversarial Network) BIBREF33. In their work, they define a two-stage model that uses two cascaded GANs, each corresponding to one of the stages. The stage I GAN takes a text description as input, converts the text description to a text embedding containing several conditioning variables, and generates a low-quality 64x64 image with rough shapes and colors based on the computed conditioning variables. The stage II GAN then takes this low-quality stage I image as well as the same text embedding and uses the conditioning variables to correct and add more detail to the stage I result. The output of stage II is a photorealistic 256$\times $256 image that resembles the text description with compelling accuracy.
One major contribution of StackGAN is the use of cascaded GANs for text-to-image synthesis through a sketch-refinement process. By conditioning the stage II GAN on the image produced by the stage I GAN and text description, the stage II GAN is able to correct defects in the stage I output, resulting in high-quality 256x256 images. Prior works have utilized “stacked” GANs to separate the image generation process into structure and style BIBREF42, multiple stages each generating lower-level representations from higher-level representations of the previous stage BIBREF35, and multiple stages combined with a laplacian pyramid approach BIBREF54, which was introduced for image compression by P. Burt and E. Adelson in 1983 and uses the differences between consecutive down-samples of an original image to reconstruct the original image from its down-sampled version BIBREF55. However, these works did not use text descriptions to condition their generator models.
Conditioning Augmentation is the other major contribution of StackGAN. Prior works transformed the natural language text description into a fixed text embedding containing static conditioning variables which were fed to the generator BIBREF8. StackGAN does this and then creates a Gaussian distribution from the text embedding and randomly selects variables from the Gaussian distribution to add to the set of conditioning variables during training. This encourages robustness by introducing small variations to the original text embedding for a particular training image while keeping the training image that the generated output is compared to the same. The result is that the trained model produces more diverse images in the same distribution when using Conditioning Augmentation than the same model using a fixed text embedding BIBREF33.
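Conditioning Augmentation can be sketched as a small reparameterized Gaussian layer on top of the text embedding, usually regularized with a KL term toward a standard normal distribution. The code below is a minimal illustration with assumed dimensions, not StackGAN's exact implementation.

```python
import torch
import torch.nn as nn

class ConditioningAugmentation(nn.Module):
    """Map a fixed text embedding to a Gaussian and sample conditioning
    variables from it via the reparameterization trick, as in StackGAN."""
    def __init__(self, text_dim=1024, cond_dim=128):
        super().__init__()
        self.fc = nn.Linear(text_dim, cond_dim * 2)  # predicts mu and log-variance

    def forward(self, text_embedding):
        mu, logvar = self.fc(text_embedding).chunk(2, dim=1)
        std = torch.exp(0.5 * logvar)
        c_hat = mu + std * torch.randn_like(std)  # sampled conditioning vector
        # KL(N(mu, sigma) || N(0, I)) keeps the conditioning manifold smooth.
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return c_hat, kl

ca = ConditioningAugmentation()
c_hat, kl_loss = ca(torch.randn(4, 1024))  # c_hat feeds the stage-I generator
```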
Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: StackGAN++
Proposed by the same authors as StackGAN, StackGAN++ is also a stacked GAN model, but organizes the generators and discriminators in a “tree-like” structure BIBREF47 with multiple stages. The first stage combines a noise vector and conditioning variables (with Conditional Augmentation introduced in BIBREF33) for input to the first generator, which generates a low-resolution image, 64$\times $64 by default (this can be changed depending on the desired number of stages). Each following stage uses the result from the previous stage and the conditioning variables to produce gradually higher-resolution images. These stages do not use the noise vector again, as the creators assume that the randomness it introduces is already preserved in the output of the first stage. The final stage produces a 256$\times $256 high-quality image.
StackGAN++ introduces the joint conditional and unconditional approximation in their designs BIBREF47. The discriminators are trained to calculate the loss between the image produced by the generator and the conditioning variables (measuring how accurately the image represents the description) as well as the loss between the image and real images (probability of the image being real or fake). The generators then aim to minimize the sum of these losses, improving the final result.
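A minimal sketch of this joint objective is shown below, assuming the discriminator already produces separate unconditional and conditional logits; the equal weighting of the two terms is an illustrative choice.

```python
import torch
import torch.nn.functional as F

def joint_d_loss(uncond_logits_real, uncond_logits_fake,
                 cond_logits_real, cond_logits_fake):
    """StackGAN++-style joint loss: the unconditional term judges whether an
    image looks real at all, the conditional term judges whether it matches
    the text conditioning variables."""
    ones = torch.ones_like(uncond_logits_real)
    zeros = torch.zeros_like(uncond_logits_fake)
    uncond = (F.binary_cross_entropy_with_logits(uncond_logits_real, ones) +
              F.binary_cross_entropy_with_logits(uncond_logits_fake, zeros))
    cond = (F.binary_cross_entropy_with_logits(cond_logits_real, ones) +
            F.binary_cross_entropy_with_logits(cond_logits_fake, zeros))
    return uncond + cond

# Toy logits for a batch of 8 images.
loss = joint_d_loss(torch.randn(8, 1), torch.randn(8, 1),
                    torch.randn(8, 1), torch.randn(8, 1))
```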
Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: AttnGAN
Attentional Generative Adversarial Network (AttnGAN) BIBREF10 is very similar, in terms of its structure, to StackGAN++ BIBREF47, discussed in the previous section, but some novel components are added. Like previous works BIBREF56, BIBREF8, BIBREF33, BIBREF47, a text encoder generates a text embedding with conditioning variables based on the overall sentence. Additionally, the text encoder generates a separate text embedding with conditioning variables based on individual words. This process is optimized to produce meaningful variables using a bidirectional recurrent neural network (BRNN), more specifically bidirectional Long Short Term Memory (LSTM) BIBREF57, which, for each word in the description, generates conditions based on the previous word as well as the next word (bidirectional). The first stage of AttnGAN generates a low-resolution image based on the sentence-level text embedding and random noise vector. The output is fed along with the word-level text embedding to an “attention model”, which matches the word-level conditioning variables to regions of the stage I image, producing a word-context matrix. This is then fed to the next stage of the model along with the raw previous stage output. Each consecutive stage works in the same manner, but produces gradually higher-resolution images conditioned on the previous stage.
Two major contributions were introduced in AttnGAN: the attentional generative network and the Deep Attentional Multimodal Similarity Model (DAMSM) BIBREF47. The attentional generative network matches specific regions of each stage's output image to conditioning variables from the word-level text embedding. This is a very worthy contribution, allowing each consecutive stage to focus on specific regions of the image independently, adding “attentional” details region by region as opposed to the whole image. The DAMSM is also a key feature introduced by AttnGAN, which is used after the result of the final stage to calculate the similarity between the generated image and the text embedding at both the sentence level and the more fine-grained word level. Table TABREF48 shows scores from different metrics for StackGAN, StackGAN++, AttnGAN, and HDGAN on the CUB, Oxford, and COCO datasets. The table shows that AttnGAN outperforms the other models in terms of IS on the CUB dataset by a small amount and greatly outperforms them on the COCO dataset.
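The word-context computation inside the attentional generative network can be sketched as a softmax over region-word similarities, producing, for every image sub-region, a weighted sum of word features. The single projection layer and the dimensions below are simplifications of the published architecture and are only meant to convey the idea.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def word_context(region_feats, word_feats, proj):
    """region_feats: (B, N, D)  image sub-region features from the previous stage
    word_feats:   (B, T, Dw) word-level text features
    Returns a word-context matrix (B, N, D): for every image region, a weighted
    sum of the words it should attend to."""
    words = proj(word_feats)                                  # map words into the image feature space
    scores = torch.bmm(region_feats, words.transpose(1, 2))   # (B, N, T) region-word similarity
    attn = F.softmax(scores, dim=-1)                          # attention over words for each region
    return torch.bmm(attn, words)                             # (B, N, D)

B, N, T, D, Dw = 2, 64, 12, 48, 256
proj = nn.Linear(Dw, D)
context = word_context(torch.randn(B, N, D), torch.randn(B, T, Dw), proj)
```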
Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: HDGAN
Hierarchically-nested adversarial network (HDGAN) is a method proposed by BIBREF36, and its main objective is to tackle the difficult problem of generating photographic images from semantic text descriptions. These semantic text descriptions are applied to images from diverse datasets. This method introduces adversarial objectives nested inside hierarchically oriented networks BIBREF36. Hierarchical networks help regularize mid-level representations and assist the training of the generator in order to capture highly complex still media elements. These elements are captured in statistical order to train the generator based on settings extracted directly from the image. The latter is an ideal scenario. However, this paper aims to incorporate a single-stream architecture. This single-stream architecture functions as the generator that forms an optimum adaptability towards the jointed discriminators. Once the jointed discriminators are set up in an optimum manner, the single-stream architecture then advances the generated images to a much higher resolution BIBREF36.
The main contributions of HDGANs include the introduction of a visual-semantic similarity measure BIBREF36. This feature aids in the evaluation of the consistency of generated images. In addition to checking the consistency of generated images, one of the key objectives of this step is to test the logical consistency of the end product BIBREF36. The end product in this case would be images that are semantically mapped from text-based natural language descriptions to each area of the picture, e.g. a wing on a bird or a petal on a flower. Deep learning has created a multitude of opportunities and challenges for researchers in the computer vision AI field. Coupled with GAN and multimodal learning architectures, this field has seen tremendous growth BIBREF8, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF22, BIBREF26. Based on these advancements, HDGANs attempt to further extend some desirable and less common features when generating images from textual natural language BIBREF36. In other words, it takes sentences and treats them as a hierarchical structure. This has some positive and negative implications in most cases. For starters, it makes it more complex to generate compelling images. However, one of the key benefits of this elaborate process is the realism obtained once all processes are completed. In addition, one common feature added to this process is the ability to identify parts of sentences with bounding boxes. If a sentence includes common characteristics of a bird, it will surround the attributes of such a bird with bounding boxes. In practice, this should happen if the desired image has other elements such as human faces (e.g. eyes, hair, etc), flowers (e.g. petal size, color, etc), or any other inanimate object (e.g. a table, a mug, etc). Finally, HDGANs evaluated some of their claims on common ideal text-to-image datasets such as CUB, COCO, and Oxford-102 BIBREF8, BIBREF36, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF22, BIBREF26. These datasets were first utilized in earlier works BIBREF8, and most of them sport modified features such as image annotations, labels, or descriptions. The qualitative and quantitative results reported by researchers in this study were far superior to earlier works in this same field of computer vision AI.
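Although HDGAN's exact formulation is not reproduced here, a visual-semantic similarity measure is commonly instantiated as a cosine score between image and sentence embeddings projected into a joint space, as in the generic sketch below; all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualSemanticSimilarity(nn.Module):
    """Project image and sentence features into a shared space and score
    their agreement with cosine similarity."""
    def __init__(self, img_dim=2048, txt_dim=1024, joint_dim=256):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, joint_dim)
        self.txt_proj = nn.Linear(txt_dim, joint_dim)

    def forward(self, img_feat, txt_feat):
        v = F.normalize(self.img_proj(img_feat), dim=1)
        t = F.normalize(self.txt_proj(txt_feat), dim=1)
        return (v * t).sum(dim=1)  # cosine similarity in [-1, 1]

sim = VisualSemanticSimilarity()
scores = sim(torch.randn(4, 2048), torch.randn(4, 1024))
```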
Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs
In this subsection, we introduce text-to-image synthesis methods which try to maximize the diversity of the output images, based on the text descriptions.
Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: AC-GAN
Two issues arise in the traditional GANs BIBREF58 for image synthesis: (1) scalability problem: traditional GANs cannot predict a large number of image categories; and (2) diversity problem: images are often subject to one-to-many mapping, so one image could be labeled with different tags or described using different texts. To address these problems, GAN conditioned on additional information, e.g. cGAN, is an alternative solution. However, although cGAN and many previously introduced approaches are able to generate images with respect to the text descriptions, they often output images with similar types and visual appearance.
Slightly different from the cGAN, auxiliary classifier GAN (AC-GAN) BIBREF27 proposes to improve the diversity of output images by using an auxiliary classifier to control output images. The overall structure of AC-GAN is shown in Fig. FIGREF15(c). In AC-GAN, every generated image is associated with a class label, in addition to the true/fake label which is commonly used in GAN or cGAN. The discriminator of AC-GAN not only outputs a probability distribution over sources (i.e. whether the image is true or fake), it also outputs a probability distribution over the class labels (i.e. predicts which class the image belongs to).
By using an auxiliary classifier layer to predict the class of the image, AC-GAN is able to use the predicted class labels of the images to ensure that the output consists of images from different classes, resulting in diversified synthesized images. The results show that AC-GAN can generate images with high diversity.
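A minimal sketch of the AC-GAN discriminator and its two-part loss is given below; the flattened-image MLP body and the dimensions are illustrative stand-ins for the convolutional discriminator used in practice.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ACGANDiscriminator(nn.Module):
    """AC-GAN-style discriminator with two heads: one for real/fake (source)
    and one auxiliary classifier over class labels."""
    def __init__(self, img_dim=64 * 64 * 3, n_classes=10):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2))
        self.source_head = nn.Linear(256, 1)         # P(real vs. fake)
        self.class_head = nn.Linear(256, n_classes)  # P(class | image)

    def forward(self, img):
        h = self.body(img)
        return self.source_head(h), self.class_head(h)

D = ACGANDiscriminator()
imgs = torch.randn(8, 64 * 64 * 3)
labels = torch.randint(0, 10, (8,))
src_logits, cls_logits = D(imgs)
# Discriminator objective = source loss + auxiliary classification loss.
d_loss = (F.binary_cross_entropy_with_logits(src_logits, torch.ones(8, 1)) +
          F.cross_entropy(cls_logits, labels))
```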
Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: TAC-GAN
Building on the AC-GAN, TAC-GAN BIBREF59 is proposed to replace the class information with textual descriptions as the input to perform the task of text-to-image synthesis. The architecture of TAC-GAN is shown in Fig. FIGREF15(d), which is similar to AC-GAN. Overall, the major difference between TAC-GAN and AC-GAN is that TAC-GAN conditions the generated images on text descriptions instead of on a class label. This design makes TAC-GAN more generic for image synthesis.
For TAC-GAN, it imposes restrictions on generated images in both texts and class labels. The input vector of TAC-GAN's generative network is built based on a noise vector and an embedded vector representation of textual descriptions. The discriminator of TAC-GAN is similar to that of the AC-GAN, which not only predicts whether the image is fake or not, but also predicts the label of the images. A minor difference of TAC-GAN's discriminator, compared to that of the AC-GAN, is that it also receives text information as input before performing its classification.
The experiments and validations, on the Oxford-102 flowers dataset, show that the results produced by TAC-GAN are “slightly better” than other approaches, including GAN-INT-CLS and StackGAN.
Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: Text-SeGAN
In order to improve the diversity of the output images, both AC-GAN and TAC-GAN's discriminators predict class labels of the synthesised images. This process likely enforces the semantic diversity of the images, but class labels are inherently restrictive in describing image semantics, and images described by text can be matched to multiple labels. Therefore, instead of predicting images' class labels, an alternative solution is to directly quantify their semantic relevance.
The architecture of Text-SeGAN is shown in Fig. FIGREF15(e). In order to directly quantify semantic relevance, Text-SeGAN BIBREF28 adds a regression layer to estimate the semantic relevance between the image and text instead of a classifier layer predicting labels. The estimated semantic relevance is a fractional value ranging between 0 and 1, with a higher value reflecting better semantic relevance between the image and text. Due to this unique design, an inherent advantage of Text-SeGAN is that the generated images are not limited to certain classes and are semantically matched to the text input.
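A minimal sketch of such a regression-based discriminator head is shown below; the feature dimensions, the sigmoid output, and the mean-squared-error target are illustrative assumptions meant to convey the idea of scoring semantic relevance in [0, 1] rather than Text-SeGAN's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticRelevanceDiscriminator(nn.Module):
    """Text-SeGAN-style discriminator head: instead of predicting a class
    label, a regression layer estimates image-text semantic relevance in [0, 1]."""
    def __init__(self, img_dim=1024, txt_dim=256):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(img_dim + txt_dim, 256), nn.LeakyReLU(0.2))
        self.source_head = nn.Linear(256, 1)     # real vs. fake logit
        self.relevance_head = nn.Linear(256, 1)  # semantic relevance regression

    def forward(self, img_feat, txt_feat):
        h = self.body(torch.cat([img_feat, txt_feat], dim=1))
        return self.source_head(h), torch.sigmoid(self.relevance_head(h))

D = SemanticRelevanceDiscriminator()
src, relevance = D(torch.randn(8, 1024), torch.randn(8, 256))
# Train the relevance head toward target scores, e.g. 1.0 for matching pairs.
rel_loss = F.mse_loss(relevance, torch.ones(8, 1))
```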
Experiments and validations, on the Oxford-102 flower dataset, show that Text-SeGAN can generate diverse images that are semantically relevant to the input text. In addition, the results of Text-SeGAN show improved inception scores compared to other approaches, including GAN-INT-CLS, StackGAN, TAC-GAN, and HDGAN.
Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: MirrorGAN and Scene Graph GAN
Due to the inherent complexity of visual images, and the diversity of text descriptions (i.e. the same words could imply different meanings), it is difficult to precisely match the texts to the visual images at the semantic level. Most methods we have discussed so far employ a direct text to image generation process, but there is no validation of how the generated images comply with the text in a reverse fashion.
To ensure semantic consistency and diversity, MirrorGAN BIBREF60 employs a mirror structure, which reversely learns from generated images to output texts (an image-to-text process) to further validate whether the generated images are indeed consistent with the input texts. MirrorGAN includes three modules: a semantic text embedding module (STEM), a global-local collaborative attentive module for cascaded image generation (GLAM), and a semantic text regeneration and alignment module (STREAM). The back-to-back Text-to-Image (T2I) and Image-to-Text (I2T) processes are combined to progressively enhance the diversity and semantic consistency of the generated images.
In order to enhance the diversity of the output images, Scene Graph GAN BIBREF61 proposes to use visual scene graphs to describe the layout of the objects, allowing users to precisely specify the relationships between objects in the images. In order to convert the visual scene graph into input for the GAN to generate images, this method uses graph convolution to process input graphs. It computes a scene layout by predicting bounding boxes and segmentation masks for objects. After that, it converts the computed layout to an image with a cascaded refinement network.
Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs
Instead of focusing on generating static images, another line of text-to-image synthesis research focuses on generating videos (i.e. sequences of images) from texts. In this context, the synthesised videos are often useful resources for automated assistance or story telling.
Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs ::: ObamaNet and T2S
One early and interesting work in motion enhancement GANs is to generate spoofed speech and lip-sync videos (or talking faces) of Barack Obama (i.e. ObamaNet) based on text input BIBREF62. This framework consists of three parts, i.e. text-to-speech using “Char2Wav”, mouth shape representation synced to the audio using a time-delayed LSTM, and “video generation” conditioned on the mouth shape using a “U-Net” architecture. Although the results seem promising, ObamaNet only models the mouth region and the videos are not generated from noise, which can be regarded as video prediction rather than video generation.
Another meaningful attempt at using synthesised videos for automated assistance is to translate spoken language (e.g. text) into sign language video sequences (i.e. T2S) BIBREF63. This is often achieved through a two-step process: converting texts into meaningful units to generate images, followed by a learning component that arranges images into sequential order for the best representation. More specifically, using RNN based machine translation methods, texts are translated into sign language gloss sequences. Then, glosses are mapped to skeletal pose sequences using a lookup-table. To generate videos, a conditional DCGAN is built whose input is the concatenation of the latent representation of the image for a base pose and the skeletal pose information.
Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs ::: T2V
In BIBREF64, a text-to-video model (T2V) is proposed based on the cGAN, in which the input to the generator is isometric Gaussian noise combined with the text-gist vector. A key component of generating videos from text is to train a conditional generative model to extract both static and dynamic information from text, followed by a hybrid framework combining a Variational Autoencoder (VAE) and a Generative Adversarial Network (GAN).
More specifically, T2V relies on two types of features, static features and dynamic features, to generate videos. Static features, called the “gist”, are used to sketch the text-conditioned background color and object layout structure. Dynamic features, on the other hand, are obtained by transforming the input text into an image filter, which eventually forms the video generator consisting of three entangled neural networks. The text-gist vector is generated by a gist generator, which maintains static information (e.g. background), and a text2filter, which captures the dynamic information (i.e. actions) in the text to generate videos.
As demonstrated in the paper BIBREF64, the generated videos are semantically related to the texts, but have a rather low quality (e.g. only $64 \times 64$ resolution).
Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs ::: StoryGAN
Different from T2V, which generates videos from a single text, StoryGAN aims to produce dynamic scenes consistent with the specified texts (i.e. a story written in a multi-sentence paragraph) using a sequential GAN model BIBREF65. A story encoder, a context encoder, and discriminators are the main components of this model. By using stochastic sampling, the story encoder intends to learn a low-dimensional embedding vector for the whole story to keep the continuity of the story. The context encoder is proposed to capture contextual information during sequential image generation based on a deep RNN. The two discriminators of StoryGAN are an image discriminator, which evaluates the generated images, and a story discriminator, which ensures global consistency.
The experiments and comparisons, on the CLEVR dataset and the Pororo cartoon dataset, which were originally used for visual question answering, show that StoryGAN improves the generated video quality in terms of Structural Similarity Index (SSIM), visual quality, consistency, and relevance (the last three measures are based on human evaluation).
GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Text-to-image Synthesis Applications
Computer vision applications have strong potential for industries including but not limited to the medical, government, military, entertainment, and online social media fields BIBREF7, BIBREF66, BIBREF67, BIBREF68, BIBREF69, BIBREF70. Text-to-image synthesis is one such application in computer vision AI that has become the main focus in recent years due to its potential for providing beneficial properties and opportunities for a wide range of applicable areas.
Text-to-image synthesis is an application byproduct of deep convolutional decoder networks in combination with GANs BIBREF7, BIBREF8, BIBREF10. Deep convolutional networks have contributed to several breakthroughs in image, video, speech, and audio processing. This learning method intends, among other possibilities, to help translate sequential text descriptions to images supplemented by one or many additional methods. Algorithms and methods developed in the computer vision field have allowed researchers in recent years to create realistic images from plain sentences. Advances in the computer vision, deep convolutional nets, and semantic units have shined light and redirected focus to this research area of text-to-image synthesis, having as its prime directive: to aid in the generation of compelling images with as much fidelity to text descriptions as possible.
To date, models for generating synthetic images from textual natural language in research laboratories at universities and private companies have yielded compelling images of flowers and birds BIBREF8. Though flowers and birds are the most common objects studied thus far, research has been applied to other classes as well. For example, there have been studies focused solely on human faces BIBREF7, BIBREF8, BIBREF71, BIBREF72.
It’s a fascinating time for computer vision AI and deep learning researchers and enthusiasts. The consistent advancement in hardware, software, and contemporaneous development of computer vision AI research disrupts multiple industries. These advances in technology allow for the extraction of several data types from a variety of sources. For example, image data captured from a variety of photo-ready devices, such as smart-phones, and online social media services opened the door to the analysis of large amounts of media datasets BIBREF70. The availability of large media datasets allow new frameworks and algorithms to be proposed and tested on real-world data.
GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Text-to-image Synthesis Benchmark Datasets
A summary of some reviewed methods and benchmark datasets used for validation is reported in Table TABREF43. In addition, the performance of different GANs with respect to the benchmark datasets and performance metrics is reported in Table TABREF48.
In order to synthesize images from text descriptions, many frameworks have taken a minimalistic approach by creating small and background-less images BIBREF73. In most cases, the experiments were conducted on simple datasets, initially containing images of birds and flowers. BIBREF8 contributed to these data sets by adding corresponding natural language text descriptions to subsets of the CUB, MSCOCO, and Oxford-102 datasets, which facilitated the work on text-to-image synthesis for several papers released more recently.
While most deep learning algorithms use the MNIST BIBREF74 dataset as the benchmark, there are four main datasets that are commonly used for evaluation of proposed GAN models for text-to-image synthesis: CUB BIBREF75, Oxford BIBREF76, COCO BIBREF77, and CIFAR-10 BIBREF78. CUB BIBREF75 contains 200 bird species with matching text descriptions, and Oxford BIBREF76 contains 102 categories of flowers with 40-258 images each and matching text descriptions. These datasets contain individual objects, with the text description corresponding to that object, making them relatively simple. COCO BIBREF77 is much more complex, containing 328k images with 91 different object types. The CIFAR-10 BIBREF78 dataset consists of 60000 32$\times$32 colour images in 10 classes, with 6000 images per class. In contrast to CUB and Oxford, whose images each contain an individual object, COCO’s images may contain multiple objects, each with a label, so there are many labels per image. The total number of labels over the 328k images is 2.5 million BIBREF77.
GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Text-to-image Synthesis Benchmark Evaluation Metrics
Several evaluation metrics are used for judging the images produced by text-to-image GANs. Proposed by BIBREF25, the Inception Score (IS) calculates the entropy (randomness) of the conditional distribution, obtained by applying the Inception Model introduced in BIBREF79, and of the marginal distribution of a large set of generated images, which should be low and high, respectively, for meaningful images. Low entropy of the conditional distribution means that the evaluator is confident that the images came from the data distribution, and high entropy of the marginal distribution means that the set of generated images is diverse, which are both desired features. The IS is then computed by exponentiating the average KL-divergence between the conditional and marginal distributions, so higher scores are better. FCN-scores BIBREF2 are computed in a similar manner, relying on the intuition that realistic images generated by a GAN should be able to be classified correctly by a classifier trained on real images of the same distribution. Therefore, if the FCN classifier classifies a set of synthetic images accurately, the images are probably realistic, and the corresponding GAN gets a high FCN score. Frechet Inception Distance (FID) BIBREF80 is the other commonly used evaluation metric, and takes a different approach, actually comparing the generated images to real images in the distribution. A high FID means there is little relationship between the statistics of the synthetic and real images and vice versa, so lower FIDs are better.
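To make the Inception Score computation concrete, the following is a minimal sketch that assumes the class-probability vectors $p(y|x)$ predicted by a pretrained Inception model are already available as a NumPy array; the split count and the random inputs in the usage example are illustrative assumptions, not the protocol of any particular paper.

```python
import numpy as np

def inception_score(probs, n_splits=10, eps=1e-12):
    """Minimal IS sketch. probs: (N, C) class probabilities p(y|x) predicted by a
    pretrained Inception model for N generated images."""
    scores = []
    for split in np.array_split(probs, n_splits):
        p_y = split.mean(axis=0, keepdims=True)              # marginal distribution p(y)
        kl = split * (np.log(split + eps) - np.log(p_y + eps))
        mean_kl = kl.sum(axis=1).mean()                       # E_x[ KL(p(y|x) || p(y)) ]
        scores.append(np.exp(mean_kl))                        # higher is better
    return float(np.mean(scores)), float(np.std(scores))

# Usage with random "predictions" (illustration only):
fake_probs = np.random.dirichlet(np.ones(1000), size=5000)   # 5000 images, 1000 classes
print(inception_score(fake_probs))
```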
The performance of different GANs with respect to the benchmark datasets and performance metrics is reported in Table TABREF48. In addition, Figure FIGREF49 further lists the performance of 14 GANs with respect to their Inception Scores (IS).
GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: GAN Based Text-to-image Synthesis Results Comparison
While we gathered all the data we could find on scores for each model on the CUB, Oxford, and COCO datasets using IS, FID, FCN, and human classifiers, we unfortunately were unable to find certain data for AttnGAN and HDGAN (missing in Table TABREF48). The best evaluation we can give for those with missing data is our own opinions by looking at examples of generated images provided in their papers. In this regard, we observed that HDGAN produced relatively better visual results on the CUB and Oxford datasets while AttnGAN produced far more impressive results than the rest on the more complex COCO dataset. This is evidence that the attentional model and DAMSM introduced by AttnGAN are very effective in producing high-quality images. Examples of the best results of birds and plates of vegetables generated by each model are presented in Figures FIGREF50 and FIGREF51, respectively.
In terms of inception score (IS), which is the metric that was applied to the majority of models except DC-GAN, the results in Table TABREF48 show that StackGAN++ only showed slight improvement over its predecessor, StackGAN, for text-to-image synthesis. However, StackGAN++ did introduce a very worthy enhancement for unconditional image generation by organizing the generators and discriminators in a “tree-like” structure. This indicates that revising the structures of the discriminators and/or generators can bring a moderate level of improvement in text-to-image synthesis.
In addition, the results in Table TABREF48 also show that DM-GAN BIBREF53 has the best performance, followed by Obj-GAN BIBREF81. Notice that both DM-GAN and Obj-GAN are the most recently developed methods in the field (both published in 2019), indicating that research in text-to-image synthesis is continuously improving the results for better visual perception and interpretation. Technically, DM-GAN BIBREF53 is a model using dynamic memory to refine fuzzy image contents initially generated from the GAN networks. A memory writing gate is used by DM-GAN to select important text information and generate images based on the selected text accordingly. On the other hand, Obj-GAN BIBREF81 focuses on object-centered text-to-image synthesis. The proposed framework of Obj-GAN consists of a layout generator, including a bounding box generator and a shape generator, and an object-driven attentive image generator. The designs and advancement of DM-GAN and Obj-GAN indicate that research in text-to-image synthesis is advancing to put more emphasis on the image details and text semantics for better understanding and perception.
GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Notable Mentions
It is worth noting that although this survey mainly focuses on text-to-image synthesis, there have been other applications of GANs in broader image synthesis field that we found fascinating and worth dedicating a small section to. For example, BIBREF72 used Sem-Latent GANs to generate images of faces based on facial attributes, producing impressive results that, at a glance, could be mistaken for real faces. BIBREF82, BIBREF70, and BIBREF83 demonstrated great success in generating text descriptions from images (image captioning) with great accuracy, with BIBREF82 using an attention-based model that automatically learns to focus on salient objects and BIBREF83 using deep visual-semantic alignments. Finally, there is a contribution made by StackGAN++ that was not mentioned in the dedicated section due to its relation to unconditional image generation as opposed to conditional, namely a color-regularization term BIBREF47. This additional term aims to keep the samples generated from the same input at different stages more consistent in color, which resulted in significantly better results for the unconditional model.
Conclusion
The recent advancement in text-to-image synthesis research opens the door to several compelling methods and architectures. The main objective of text-to-image synthesis initially was to create images from simple labels, and this objective later scaled to natural languages. In this paper, we reviewed novel methods that generate, in our opinion, the most visually-rich and photo-realistic images, from text-based natural language. These generated images often rely on generative adversarial networks (GANs), deep convolutional decoder networks, and multimodal learning methods.
In the paper, we first proposed a taxonomy to organize GAN based text-to-image synthesis frameworks into four major groups: semantic enhancement GANs, resolution enhancement GANs, diversity enhancement GANs, and motion enhancement GANs. The taxonomy provides a clear roadmap to show the motivations, architectures, and differences of the various methods, and also outlines their evolution timeline and relationships. Following the proposed taxonomy, we reviewed important features of each method and their architectures. We indicated the model definitions and key contributions from some advanced GAN frameworks, including StackGAN, StackGAN++, AttnGAN, DC-GAN, AC-GAN, TAC-GAN, HDGAN, Text-SeGAN, StoryGAN, etc. Many of the solutions surveyed in this paper tackled the highly complex challenge of generating photo-realistic images beyond swatch-size samples, i.e. beyond the work of BIBREF8 in which images were generated from text in 64$\times $64 tiny swatches. Lastly, all methods were evaluated on datasets that included birds, flowers, humans, and other miscellaneous elements. We were also able to identify some important papers that were as impressive as the papers we finally surveyed, though these notable papers have yet to contribute directly or indirectly to the expansion of the vast computer vision AI field. Looking into the future, an excellent extension of the works surveyed in this paper would be to give more independence to the several learning methods (e.g. less human intervention) involved in the studies, as well as increasing the size of the output images.
Conflict of Interest
The authors declare that there is no conflict of interest regarding the publication of this article. | HDGAN produced relatively better visual results on the CUB and Oxford datasets while AttnGAN produced far more impressive results than the rest on the more complex COCO dataset, In terms of inception score (IS), which is the metric that was applied to majority models except DC-GAN, the results in Table TABREF48 show that StackGAN++ only showed slight improvement over its predecessor, text to image synthesis is continuously improving the results for better visual perception and interception |
6389d5a152151fb05aae00b53b521c117d7b5e54 | 6389d5a152151fb05aae00b53b521c117d7b5e54_0 | Q: What is typical GAN architecture for each text-to-image synhesis group?
Text: Introduction
“ (GANs), and the variations that are now being proposed is the most interesting idea in the last 10 years in ML, in my opinion.” (2016)
– Yann LeCun
A picture is worth a thousand words! While written text provides efficient, effective, and concise ways for communication, visual content, such as images, is a more comprehensive, accurate, and intelligible method of information sharing and understanding. Generation of images from text descriptions, i.e. text-to-image synthesis, is a complex computer vision and machine learning problem that has seen great progress over recent years. Automatic image generation from natural language may allow users to describe visual elements through visually-rich text descriptions. The ability to do so effectively is highly desirable as it could be used in artificial intelligence applications such as computer-aided design, image editing BIBREF0, BIBREF1, game engines for the development of the next generation of video games BIBREF2, and pictorial art generation BIBREF3.
Introduction ::: Traditional Learning Based Text-to-image Synthesis
In the early stages of research, text-to-image synthesis was mainly carried out through a combined process of search and supervised learning BIBREF4, as shown in Figure FIGREF4. In order to connect text descriptions to images, one could use correlation between keywords (or keyphrases) and images to identify informative and “picturable” text units; then, these units would be used to search for the most likely image parts conditioned on the text, eventually optimizing the picture layout conditioned on both the text and the image parts. Such methods often integrated multiple artificial intelligence key components, including natural language processing, computer vision, computer graphics, and machine learning.
The major limitation of the traditional learning based text-to-image synthesis approaches is that they lack the ability to generate new image content; they can only change the characteristics of the given/training images. Alternatively, research in generative models has advanced significantly and delivers solutions to learn from training images and produce new visual content. For example, Attribute2Image BIBREF5 models each image as a composite of foreground and background. In addition, a layered generative model with disentangled latent variables is learned, using a variational auto-encoder, to generate visual content. Because the learning is customized/conditioned by given attributes, the generative models of Attribute2Image can generate images with respect to different attributes, such as gender, hair color, age, etc., as shown in Figure FIGREF5.
Introduction ::: GAN Based Text-to-image Synthesis
Although generative model based text-to-image synthesis provides much more realistic image synthesis results, the image generation is still conditioned by the limited attributes. In recent years, several papers have been published on the subject of text-to-image synthesis. Most of the contributions from these papers rely on multimodal learning approaches that include generative adversarial networks and deep convolutional decoder networks as their main drivers to generate entrancing images from text BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11.
First introduced by Ian Goodfellow et al. BIBREF9, generative adversarial networks (GANs) consist of two neural networks, a generator paired against a discriminator. These two models compete with one another, with the generator attempting to produce synthetic/fake samples that will fool the discriminator and the discriminator attempting to differentiate between real (genuine) and synthetic samples. Because GANs' adversarial training aims to cause generators to produce images similar to the real (training) images, GANs can naturally be used to generate synthetic images (image synthesis), and this process can even be customized further by using text descriptions to specify the types of images to generate, as shown in Figure FIGREF6.
Much like text-to-speech and speech-to-text conversion, there exists a wide variety of problems that text-to-image synthesis could solve in the computer vision field specifically BIBREF8, BIBREF12. Nowadays, researchers are attempting to solve a plethora of computer vision problems with the aid of deep convolutional networks, generative adversarial networks, and a combination of multiple methods, often called multimodal learning methods BIBREF8. For simplicity, multiple learning methods will be referred to as multimodal learning hereafter BIBREF13. Researchers often describe multimodal learning as a method that incorporates characteristics from several methods, algorithms, and ideas. This can include ideas from two or more learning approaches in order to create a robust implementation to solve an uncommon problem or improve a solution BIBREF8, BIBREF14, BIBREF15, BIBREF16, BIBREF17.
In this survey, we focus primarily on reviewing recent works that aim to solve the challenge of text-to-image synthesis using generative adversarial networks (GANs). In order to provide a clear roadmap, we propose a taxonomy to summarize reviewed GANs into four major categories. Our review will elaborate the motivations of methods in each category, analyze typical models, their network architectures, and possible drawbacks for further improvement. The visual abstract of the survey and the list of reviewed GAN frameworks is shown in Figure FIGREF8.
The remainder of the survey is organized as follows. Section 2 presents a brief summary of existing works on subjects similar to that of this paper and highlights the key distinctions making ours unique. Section 3 gives a short introduction to GANs and some preliminary concepts related to image generation, as they are the engines that make text-to-image synthesis possible and are essential building blocks to achieve photo-realistic images from text descriptions. Section 4 proposes a taxonomy to summarize GAN based text-to-image synthesis and discusses models and architectures of novel works focused solely on text-to-image synthesis. This section will also draw key contributions from these works in relation to their applications. Section 5 reviews GAN based text-to-image synthesis benchmarks, performance metrics, and comparisons, including a simple review of GANs for other applications. In section 6, we conclude with a brief summary and outline ideas for future interesting developments in the field of text-to-image synthesis.
Related Work
With the growth and success of GANs, deep convolutional decoder networks, and multimodal learning methods, these techniques were some of the first procedures which aimed to solve the challenge of image synthesis. Many engineers and scientists in computer vision and AI have contributed through extensive studies and experiments, with numerous proposals and publications detailing their contributions. Because GANs, introduced by BIBREF9, are emerging research topics, their practical applications to image synthesis are still in their infancy. Recently, many new GAN architectures and designs have been proposed to use GANs for different applications, e.g. using GANs to generate sentimental texts BIBREF18, or using GANs to transform natural images into cartoons BIBREF19.
Although GANs are becoming increasingly popular, very few survey papers currently exist to summarize and outline contemporaneous technical innovations and contributions of different GAN architectures BIBREF20, BIBREF21. Survey papers specifically attuned to analyzing different contributions to text-to-image synthesis using GANs are even more scarce. We have thus found two surveys BIBREF6, BIBREF7 on image synthesis using GANs, which are the two most closely related publications to our survey objective. In the following paragraphs, we briefly summarize each of these surveys and point out how our objectives differ from theirs.
In BIBREF6, the authors provide an overview of image synthesis using GANs. In that survey, the authors discuss the motivations for research on image synthesis and introduce some background information on the history of GANs, including a section dedicated to core concepts of GANs, namely generators, discriminators, and the min-max game analogy, and some enhancements to the original GAN model, such as conditional GANs, addition of variational auto-encoders, etc. In this survey, we will carry out a similar review of the background knowledge because the understanding of these preliminary concepts is paramount for the rest of the paper. Three types of approaches for image generation are reviewed, including direct methods (single generator and discriminator), hierarchical methods (two or more generator-discriminator pairs, each with a different goal), and iterative methods (each generator-discriminator pair generates a gradually higher-resolution image). Following the introduction, BIBREF6 discusses methods for text-to-image and image-to-image synthesis, respectively, and also describes several evaluation metrics for synthetic images, including inception scores and Frechet Inception Distance (FID), and explains the significance of the discriminators acting as learned loss functions as opposed to fixed loss functions.
Different from the above survey, which has a relatively broad scope in GANs, our objective is heavily focused on text-to-image synthesis. Although this topic, text-to-image synthesis, has indeed been covered in BIBREF6, they did so in a much less detailed fashion, mostly listing the many different works in a time-sequential order. In comparison, we will review several representative methods in the field and outline their models and contributions in detail.
Similarly to BIBREF6, the second survey paper BIBREF7 begins with a standard introduction addressing the motivation of image synthesis and the challenges it presents followed by a section dedicated to core concepts of GANs and enhancements to the original GAN model. In addition, the paper covers the review of two types of applications: (1) unconstrained applications of image synthesis such as super-resolution, image inpainting, etc., and (2) constrained image synthesis applications, namely image-to-image, text-to-image, and sketch-to image, and also discusses image and video editing using GANs. Again, the scope of this paper is intrinsically comprehensive, while we focus specifically on text-to-image and go into more detail regarding the contributions of novel state-of-the-art models.
Other surveys have been published on related matters, mainly related to the advancements and applications of GANs BIBREF22, BIBREF23, but we have not found any prior works which focus specifically on text-to-image synthesis using GANs. To our knowledge, this is the first paper to do so.
Preliminaries and Frameworks
In this section, we first introduce preliminary knowledge of GANs and one of their commonly used variants, the conditional GAN (i.e. cGAN), which is the building block for many GAN based text-to-image synthesis models. After that, we briefly separate GAN based text-to-image synthesis into two types, Simple GAN frameworks vs. Advanced GAN frameworks, and discuss why advanced GAN architectures are needed for text-to-image synthesis.
Notice that the simple vs. advanced GAN framework separation is rather brief; the next section will propose a taxonomy that summarizes advanced GAN frameworks into four categories, based on their objectives and designs.
Preliminaries and Frameworks ::: Generative Adversarial Neural Network
Before moving on to a discussion and analysis of works applying GANs for text-to-image synthesis, there are some preliminary concepts, enhancements of GANs, datasets, and evaluation metrics that are present in some of the works described in the next section and are thus worth introducing.
As stated previously, GANs were introduced by Ian Goodfellow et al. BIBREF9 in 2014, and consist of two deep neural networks, a generator and a discriminator, which are trained independently with conflicting goals: The generator aims to generate samples closely related to the original data distribution and fool the discriminator, while the discriminator aims to distinguish between samples from the generator model and samples from the true data distribution by calculating the probability of the sample coming from either source. A conceptual view of the generative adversarial network (GAN) architecture is shown in Figure FIGREF11.
The training of GANs is an iterative process that, with each iteration, updates the generator and the discriminator with the goal of each defeating the other, leading each model to become increasingly adept at its specific task until a threshold is reached. This is analogous to a min-max game between the two models, according to the following equation:
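Using the notation explained in the next paragraph, the standard form of this min-max objective is:

$\min _{\theta _g} \max _{\theta _d} \; \mathbb {E}_{x \sim p_{data}(x)}\left[\log D_{\theta _d}(x)\right] + \mathbb {E}_{z \sim p_{z}(z)}\left[\log \left(1 - D_{\theta _d}(G_{\theta _g}(z))\right)\right]$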
In Eq. (DISPLAY_FORM10), $x$ denotes a multi-dimensional sample, e.g., an image, and $z$ denotes a multi-dimensional latent space vector, e.g., a multidimensional data point following a predefined distribution function such as that of normal distributions. $D_{\theta _d}()$ denotes a discriminator function, controlled by parameters $\theta _d$, which aims to classify a sample into a binary space. $G_{\theta _g}()$ denotes a generator function, controlled by parameters $\theta _g$, which aims to generate a sample from some latent space vector. For example, $G_{\theta _g}(z)$ means using a latent vector $z$ to generate a synthetic/fake image, and $D_{\theta _d}(x)$ means to classify an image $x$ as binary output (i.e. true/false or 1/0). In the GAN setting, the discriminator $D_{\theta _d}()$ is learned to distinguish a genuine/true image (labeled as 1) from fake images (labeled as 0). Therefore, given a true image $x$, the ideal output from the discriminator $D_{\theta _d}(x)$ would be 1. Given a fake image generated from the generator $G_{\theta _g}(z)$, the ideal prediction from the discriminator $D_{\theta _d}(G_{\theta _g}(z))$ would be 0, indicating the sample is a fake image.
Following the above definition, the $\min \max $ objective function in Eq. (DISPLAY_FORM10) aims to learn parameters for the discriminator ($\theta _d$) and generator ($\theta _g$) to reach an optimization goal: The discriminator intends to differentiate true vs. fake images with maximum capability $\max _{\theta _d}$ whereas the generator intends to minimize the difference between a fake image vs. a true image $\min _{\theta _g}$. In other words, the discriminator sets the characteristics and the generator produces elements, often images, iteratively until it meets the attributes set forth by the discriminator. GANs are often used with images and other visual elements and are notoriously efficient in generating compelling and convincing photorealistic images. Most recently, GANs were used to generate an original painting in an unsupervised fashion BIBREF24. The following sections go into further detail regarding how the generator and discriminator are trained in GANs.
Generator - In image synthesis, the generator network can be thought of as a mapping from one representation space (latent space) to another (actual data) BIBREF21. When it comes to image synthesis, all of the images in the data space fall into some distribution in a very complex and high-dimensional feature space. Sampling from such a complex space is very difficult, so GANs instead train a generator to create synthetic images from a much more simple feature space (usually random noise) called the latent space. The generator network performs up-sampling of the latent space and is usually a deep neural network consisting of several convolutional and/or fully connected layers BIBREF21. The generator is trained using gradient descent to update the weights of the generator network with the aim of producing data (in our case, images) that the discriminator classifies as real.
Discriminator - The discriminator network can be thought of as a mapping from image data to the probability of the image coming from the real data space, and is also generally a deep neural network consisting of several convolution and/or fully connected layers. However, the discriminator performs down-sampling as opposed to up-sampling. Like the generator, it is trained using gradient descent but its goal is to update the weights so that it is more likely to correctly classify images as real or fake.
In GANs, the ideal outcome is for both the generator's and discriminator's cost functions to converge so that the generator produces photo-realistic images that are indistinguishable from real data, and the discriminator at the same time becomes an expert at differentiating between real and synthetic data. This, however, is not possible since a reduction in cost of one model generally leads to an increase in cost of the other. This phenomenon makes training GANs very difficult, and training them simultaneously (both models performing gradient descent in parallel) often leads to a stable orbit where neither model is able to converge. To combat this, the generator and discriminator are often trained independently. In this case, the GAN remains the same, but there are different training stages. In one stage, the weights of the generator are kept constant and gradient descent updates the weights of the discriminator, and in the other stage the weights of the discriminator are kept constant while gradient descent updates the weights of the generator. This is repeated for some number of epochs until a desired low cost for each model is reached BIBREF25.
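The alternating training procedure described above can be sketched as follows; this is a minimal PyTorch illustration with toy fully-connected networks, and the layer sizes, latent dimension, and optimizer settings are assumptions made only for the example, not choices from any surveyed paper.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784                      # illustrative sizes only
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    b = real_batch.size(0)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)

    # Stage 1: update the discriminator while the generator is held fixed.
    fake = G(torch.randn(b, latent_dim)).detach()   # detach: no gradient flows to G
    d_loss = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Stage 2: update the generator; only opt_g steps, so D is not changed here.
    g_loss = bce(D(G(torch.randn(b, latent_dim))), ones)   # try to make D predict "real"
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# One illustrative step on random stand-in "real" data:
print(train_step(torch.rand(16, data_dim) * 2 - 1))
```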
Preliminaries and Frameworks ::: cGAN: Conditional GAN
Conditional Generative Adversarial Networks (cGAN) are an enhancement of GANs proposed by BIBREF26 shortly after the introduction of GANs by BIBREF9. The objective function of the cGAN is defined in Eq. (DISPLAY_FORM13) which is very similar to the GAN objective function in Eq. (DISPLAY_FORM10) except that the inputs to both discriminator and generator are conditioned by a class label $y$.
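In its commonly cited form, and reusing the notation of Eq. (DISPLAY_FORM10), the conditional objective simply supplies the label $y$ to both networks:

$\min _{\theta _g} \max _{\theta _d} \; \mathbb {E}_{x \sim p_{data}(x)}\left[\log D_{\theta _d}(x|y)\right] + \mathbb {E}_{z \sim p_{z}(z)}\left[\log \left(1 - D_{\theta _d}(G_{\theta _g}(z|y))\right)\right]$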
The main technical innovation of cGAN is that it introduces an additional input or inputs to the original GAN model, allowing the model to be trained on information such as class labels or other conditioning variables as well as the samples themselves, concurrently. Whereas the original GAN was trained only with samples from the data distribution, resulting in the generated sample reflecting the general data distribution, cGAN enables directing the model to generate more tailored outputs.
In Figure FIGREF14, the condition vector is the class label (text string) "Red bird", which is fed to both the generator and discriminator. It is important, however, that the condition vector is related to the real data. If the model in Figure FIGREF14 was trained with the same set of real data (red birds) but the condition text was "Yellow fish", the generator would learn to create images of red birds when conditioned with the text "Yellow fish".
Note that the condition vector in cGAN can come in many forms, such as texts, not just limited to the class label. Such a unique design provides a direct solution to generate images conditioned by predefined specifications. As a result, cGAN has been used in text-to-image synthesis since the very first day of its invention although modern approaches can deliver much better text-to-image synthesis results.
Preliminaries and Frameworks ::: Simple GAN Frameworks for Text-to-Image Synthesis
In order to generate images from text, one simple solution is to employ the conditional GAN (cGAN) designs and add conditions to the training samples, such that the GAN is trained with respect to the underlying conditions. Several pioneer works have followed similar designs for text-to-image synthesis.
An essential disadvantage of using cGAN for text-to-image synthesis is that it cannot handle complicated textual descriptions for image generation, because cGAN uses labels as conditions to restrict the GAN inputs. If the text inputs have multiple keywords (or long text descriptions) they cannot be used simultaneously to restrict the input. Instead of using text as conditions, another two approaches BIBREF8, BIBREF16 use text as input features, and concatenate such features with other features to train the discriminator and generator, as shown in Figure FIGREF15(b) and (c). To ensure text can be used as GAN input, a feature embedding or feature representation learning BIBREF29, BIBREF30 function $\varphi ()$ is often introduced to convert the input text into numeric features, which are further concatenated with other features to train GANs.
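The concatenation idea can be sketched as follows in PyTorch: a toy stand-in for the text feature function $\varphi ()$ averages word embeddings into a fixed-length vector, which is concatenated with the noise vector inside the generator and with the image features inside the discriminator. The encoder, layer sizes, and image resolution are illustrative assumptions, not the actual encoders used by BIBREF8 or BIBREF16.

```python
import torch
import torch.nn as nn

class BagOfEmbeddings(nn.Module):
    """Toy stand-in for the text feature function phi(): average word embeddings."""
    def __init__(self, vocab_size=5000, text_dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, text_dim)
    def forward(self, token_ids):                   # (B, T) integer token ids
        return self.emb(token_ids).mean(dim=1)      # (B, text_dim)

class CondGenerator(nn.Module):
    def __init__(self, z_dim=100, text_dim=128, img_dim=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim + text_dim, 512), nn.ReLU(),
                                 nn.Linear(512, img_dim), nn.Tanh())
    def forward(self, z, text_feat):
        return self.net(torch.cat([z, text_feat], dim=1))    # condition by concatenation

class CondDiscriminator(nn.Module):
    def __init__(self, text_dim=128, img_dim=64 * 64 * 3):
        super().__init__()
        self.img_net = nn.Sequential(nn.Linear(img_dim, 512), nn.LeakyReLU(0.2))
        self.head = nn.Sequential(nn.Linear(512 + text_dim, 1), nn.Sigmoid())
    def forward(self, img, text_feat):
        h = self.img_net(img)
        return self.head(torch.cat([h, text_feat], dim=1))

phi, G, D = BagOfEmbeddings(), CondGenerator(), CondDiscriminator()
tokens = torch.randint(0, 5000, (4, 12))             # a batch of 4 toy "sentences"
t = phi(tokens)
fake = G(torch.randn(4, 100), t)
print(D(fake, t).shape)                               # torch.Size([4, 1])
```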
Preliminaries and Frameworks ::: Advanced GAN Frameworks for Text-to-Image Synthesis
Motivated by the GAN and conditional GAN (cGAN) design, many GAN based frameworks have been proposed to generate images, with different designs and architectures, such as using multiple discriminators, using progressively trained discriminators, or using hierarchical discriminators. Figure FIGREF17 outlines several advanced GAN frameworks in the literature. In addition to these frameworks, many news designs are being proposed to advance the field with rather sophisticated designs. For example, a recent work BIBREF37 proposes to use a pyramid generator and three independent discriminators, blackeach focusing on a different aspect of the images, to lead the generator towards creating images that are photo-realistic on multiple levels. Another recent publication BIBREF38 proposes to use discriminator to measure semantic relevance between image and text instead of class prediction (like most discriminator in GANs does), resulting a new GAN structure outperforming text conditioned auxiliary classifier (TAC-GAN) BIBREF16 and generating diverse, realistic, and relevant to the input text regardless of class.
In the following section, we will first propose a taxonomy that summarizes advanced GAN frameworks for text-to-image synthesis, and review the most recently proposed solutions to the challenge of generating photo-realistic images conditioned on natural language text descriptions using GANs. The solutions we discuss are selected based on relevance and quality of contributions. Many publications exist on the subject of image generation using GANs, but in this paper we focus specifically on models for text-to-image synthesis, with the review emphasizing the “model” and “contributions” for text-to-image synthesis. At the end of this section, we also briefly review methods using GANs for other image-synthesis applications.
Text-to-Image Synthesis Taxonomy and Categorization
In this section, we propose a taxonomy to summarize advanced GAN based text-to-image synthesis frameworks, as shown in Figure FIGREF24. The taxonomy organizes GAN frameworks into four categories, including Semantic Enhancement GANs, Resolution Enhancement GANs, Diversity Enhancement GANs, and Motion Enhancement GANs. Following the proposed taxonomy, each subsection will introduce several typical frameworks and address their techniques of using GANs to solve certain aspects of the text-to-image synthesis challenges.
Text-to-Image Synthesis Taxonomy and Categorization ::: GAN based Text-to-Image Synthesis Taxonomy
Although the ultimate goal of text-to-image synthesis is to generate images closely related to the textual descriptions, the relevance of the images to the texts is often validated from different perspectives, due to the inherent diversity of human perceptions. For example, when generating images matching the description “rose flowers”, some users may know the exact type of flowers they like and intend to generate rose flowers with similar colors. Other users may seek to generate high quality rose flowers with a nice background (e.g. garden). The third group of users may be more interested in generating flowers similar to roses but with different colors and visual appearance, e.g. roses, begonia, and peony. The fourth group of users may want to not only generate flower images, but also use them to form a meaningful action, e.g. a video clip showing flower growth, performing a magic show using those flowers, or telling a love story using the flowers.
From the text-to-image synthesis point of view, the first group of users intend to precisely control the semantics of the generated images, and their goal is to match the texts and images at the semantic level. The second group of users are more focused on the resolution and the quality of the images, in addition to the requirement that the images and texts are semantically related. For the third group of users, their goal is to diversify the output images, such that their images carry diversified visual appearances and are also semantically related. The fourth user group adds a new dimension in image synthesis, and aims to generate sequences of images which are coherent in temporal order, i.e. capture the motion information.
Based on the above descriptions, we categorize GAN based Text-to-Image Synthesis into a taxonomy with four major categories, as shown in Fig. FIGREF24.
Semantic Enhancement GANs: Semantic enhancement GANs represent pioneer works of GAN frameworks for text-to-image synthesis. The main focus of these GAN frameworks is to ensure that the generated images are semantically related to the input texts. This objective is mainly achieved by using a neural network to encode texts as dense features, which are further fed to a second network to generate images matching the texts.
Resolution Enhancement GANs: Resolution enhancement GANs mainly focus on generating high quality images which are semantically matched to the texts. This is mainly achieved through a multi-stage GAN framework, where the outputs from earlier stage GANs are fed to the second (or later) stage GAN to generate better quality images.
Diversity Enhancement GANs: Diversity enhancement GANs intend to diversify the output images, such that the generated images are not only semantically related but also have different types and visual appearance. This objective is mainly achieved through an additional component to estimate semantic relevance between generated images and texts, in order to maximize the output diversity.
Motion Enhancement GANs: Motion enhancement GANs intend to add a temporal dimension to the output images, such that they can form meaningful actions with respect to the text descriptions. This goal is mainly achieved through a two-step process which first generates images matching the “actions” of the texts, followed by a mapping or alignment procedure to ensure that the images are coherent in the temporal order.
In the following, we will introduce how these GAN frameworks evolve for text-to-image synthesis, and will also review some typical methods of each category.
Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs
Semantic relevance is one of the most important criteria of text-to-image synthesis. Most GANs discussed in this survey are required to generate images semantically related to the text descriptions. However, semantic relevance is a rather subjective measure, and images are inherently rich in terms of their semantics and interpretations. Therefore, many GANs are further proposed to enhance text-to-image synthesis from different perspectives. In this subsection, we will review several classical approaches which commonly serve as text-to-image synthesis baselines.
Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs ::: DC-GAN
The deep convolutional generative adversarial network (DC-GAN) BIBREF8 represents the pioneer work for text-to-image synthesis using GANs. Its main goal is to train a deep convolutional generative adversarial network (DC-GAN) on text features. During this process these text features are encoded by another neural network. This neural network is a hybrid convolutional recurrent network at the character level. Both neural networks also perform feed-forward inference when conditioning on text features. Generating realistic images automatically from natural language text is the motivation of several of the works proposed in this computer vision field. However, current artificial intelligence (AI) systems are still far from fully achieving this task BIBREF8, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF22, BIBREF26. Lately, recurrent neural networks led the way to develop frameworks that learn discriminatively on text features. At the same time, generative adversarial networks (GANs) began recently to show some promise on generating compelling images of a whole host of elements including but not limited to faces, birds, flowers, and non-common images such as room interiors BIBREF8. DC-GAN is a multimodal learning model that attempts to bridge together both of the above mentioned unsupervised machine learning algorithms, the recurrent neural networks (RNN) and generative adversarial networks (GANs), with the sole purpose of speeding up text-to-image synthesis.
Deep learning has shed light on some of the most sophisticated advances in natural language representation, image synthesis BIBREF7, BIBREF8, BIBREF43, BIBREF35, and classification of generic data BIBREF44. However, the bulk of the latest breakthroughs in deep learning and computer vision were related to supervised learning BIBREF8. Even though natural language and image synthesis were part of several contributions on the supervised side of deep learning, unsupervised learning has recently seen a tremendous rise in attention from the research community, especially on two subproblems: text-based natural language and image synthesis BIBREF45, BIBREF14, BIBREF8, BIBREF46, BIBREF47. These subproblems are typically subdivided as focused research areas. DC-GAN's contributions are mainly driven by these two research areas. In order to generate plausible images from natural language, DC-GAN's contributions revolve around developing a straightforward yet effective GAN architecture and training strategy that allows natural text-to-image synthesis. These contributions are primarily tested on the Caltech-UCSD Birds and Oxford-102 Flowers datasets. Each image in these datasets carries five text descriptions. These text descriptions were created by the research team when setting up the evaluation environment. The DC-GAN model is subsequently trained on several subcategories. Subcategories in this research represent the training and testing sub-datasets. The performance shown by these experiments displays a promising yet effective way to generate images from textual natural language descriptions BIBREF8.
Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs ::: DC-GAN Extensions
Following the pioneer DC-GAN framework BIBREF8, many researchers have proposed revised network structures (e.g. different discriminators) in order to improve images with better semantic relevance to the texts. Based on the deep convolutional adversarial network (DC-GAN) architecture, GAN-CLS with an image-text matching discriminator, GAN-INT learned with text manifold interpolation, and GAN-INT-CLS, which combines both, are proposed to find the semantic match between text and image. Similar to the DC-GAN architecture, an adaptive loss function (i.e. Perceptual Loss BIBREF48) is proposed for semantic image synthesis which can synthesize a realistic image that not only matches the target text description but also keeps the irrelevant features (e.g. background) from source images BIBREF49. Regarding the Perceptual Losses, three loss functions (i.e. pixel reconstruction loss, activation reconstruction loss and texture reconstruction loss) are proposed in BIBREF50, in which the network architectures are constructed based on DC-GAN, i.e. GAN-INT-CLS-Pixel, GAN-INT-CLS-VGG and GAN-INT-CLS-Gram with respect to the three losses. In BIBREF49, a residual transformation unit is added to the network to retain a similar structure to the source image.
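A rough sketch of the three reconstruction losses proposed in BIBREF50 is given below: a pixel loss, an activation (feature) loss, and a Gram-matrix texture loss. The small random convolutional feature extractor merely stands in for the pretrained network (e.g. a VGG) that would be used in practice, and all layer sizes are assumptions for illustration, not the exact configuration used in that work.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in feature extractor; in practice a pretrained network (e.g. VGG) is used.
features = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
).eval()
for p in features.parameters():
    p.requires_grad_(False)

def gram(feat):                                    # (B, C, H, W) -> (B, C, C)
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def reconstruction_losses(generated, target):
    pixel_loss = F.l1_loss(generated, target)                 # pixel reconstruction loss
    f_gen, f_tgt = features(generated), features(target)
    activation_loss = F.mse_loss(f_gen, f_tgt)                # activation reconstruction loss
    texture_loss = F.mse_loss(gram(f_gen), gram(f_tgt))       # texture (Gram matrix) loss
    return pixel_loss, activation_loss, texture_loss

x, y = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
print([round(l.item(), 4) for l in reconstruction_losses(x, y)])
```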
Following BIBREF49, and considering that features in early CNN layers capture the background while the foreground is obtained in later layers, a pair of discriminators with different architectures (i.e. Paired-D GAN) is proposed to synthesize background and foreground from a source image separately BIBREF51. Meanwhile, the skip-connection in the generator is employed to more precisely retain background information in the source image.
Text-to-Image Synthesis Taxonomy and Categorization ::: Semantic Enhancement GANs ::: MC-GAN
When synthesising images, most text-to-image synthesis methods consider each output image as one single unit to characterize its semantic relevance to the texts. This is likely problematic because most images naturally consist of two crucial components, foreground and background, and it is hard to characterize the semantics of an image if the whole image is treated as a single unit without properly separating these two components.
In order to enhance the semantic relevance of the images, a multi-conditional GAN (MC-GAN) BIBREF52 is proposed to synthesize a target image by combining the background of a source image and a text-described foreground object which does not exist in the source image. A unique feature of MC-GAN is that it proposes a synthesis block in which the background feature is extracted from the given image without a non-linear function (i.e. only using convolution and batch normalization) and the foreground feature is the feature map from the previous layer.
Because MC-GAN is able to properly model the background and foreground of the generated images, a unique strength of MC-GAN is that users can provide a base image and MC-GAN will preserve the background information of the base image when generating new images.
Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs
Due to the fact that training GANs is much more difficult when generating high-resolution images, a two-stage GAN (i.e. StackGAN) is proposed in which rough images (i.e. low-resolution images) are generated in stage-I and refined in stage-II. To further improve the quality of generated images, the second version of StackGAN (i.e. StackGAN++) is proposed to use multi-stage GANs to generate multi-scale images. A color-consistency regularization term is also added into the loss to keep the consistency of images across different scales.
While StackGAN and StackGAN++ are both built on the global sentence vector, AttnGAN is proposed to use an attention mechanism (i.e. the Deep Attentional Multimodal Similarity Model (DAMSM)) to model the multi-level information (i.e. word level and sentence level) in GANs. In the following, StackGAN, StackGAN++ and AttnGAN will be explained in detail.
Recently, the Dynamic Memory Generative Adversarial Network (i.e. DM-GAN) BIBREF53, which uses a dynamic memory component, has been proposed to focus on refining the initially generated images, which is the key to the success of generating high quality images.
Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: StackGAN
In 2017, Zhang et al. proposed a model for generating photo-realistic images from text descriptions called StackGAN (Stacked Generative Adversarial Network) BIBREF33. In their work, they define a two-stage model that uses two cascaded GANs, each corresponding to one of the stages. The stage I GAN takes a text description as input, converts the text description to a text embedding containing several conditioning variables, and generates a low-quality 64$\times$64 image with rough shapes and colors based on the computed conditioning variables. The stage II GAN then takes this low-quality stage I image as well as the same text embedding and uses the conditioning variables to correct and add more detail to the stage I result. The output of stage II is a photorealistic 256$\times$256 image that resembles the text description with compelling accuracy.
One major contribution of StackGAN is the use of cascaded GANs for text-to-image synthesis through a sketch-refinement process. By conditioning the stage II GAN on the image produced by the stage I GAN and the text description, the stage II GAN is able to correct defects in the stage I output, resulting in high-quality 256$\times$256 images. Prior works have utilized “stacked” GANs to separate the image generation process into structure and style BIBREF42, multiple stages each generating lower-level representations from higher-level representations of the previous stage BIBREF35, and multiple stages combined with a Laplacian pyramid approach BIBREF54, which was introduced for image compression by P. Burt and E. Adelson in 1983 and uses the differences between consecutive down-samples of an original image to reconstruct the original image from its down-sampled version BIBREF55. However, these works did not use text descriptions to condition their generator models.
Conditioning Augmentation is the other major contribution of StackGAN. Prior works transformed the natural language text description into a fixed text embedding containing static conditioning variables which were fed to the generator BIBREF8. StackGAN does this and then creates a Gaussian distribution from the text embedding and randomly selects variables from the Gaussian distribution to add to the set of conditioning variables during training. This encourages robustness by introducing small variations to the original text embedding for a particular training image while keeping the training image that the generated output is compared to the same. The result is that the trained model produces more diverse images in the same distribution when using Conditioning Augmentation than the same model using a fixed text embedding BIBREF33.
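A minimal sketch of the Conditioning Augmentation idea is shown below: a small layer predicts a mean and log-variance from the fixed text embedding, a conditioning vector is sampled with the reparameterization trick, and a KL term towards the standard normal can be added to the generator loss as a regularizer. The dimensions and the exact regularization weighting are illustrative assumptions rather than the published StackGAN settings.

```python
import torch
import torch.nn as nn

class ConditioningAugmentation(nn.Module):
    """Sample conditioning variables c ~ N(mu(t), sigma(t)^2) from a text embedding t."""
    def __init__(self, text_dim=1024, cond_dim=128):
        super().__init__()
        self.fc = nn.Linear(text_dim, cond_dim * 2)   # predicts mu and log-variance

    def forward(self, text_emb):
        mu, logvar = self.fc(text_emb).chunk(2, dim=1)
        eps = torch.randn_like(mu)
        c = mu + torch.exp(0.5 * logvar) * eps        # reparameterization trick
        # KL(N(mu, sigma^2) || N(0, I)), added to the generator loss as a regularizer
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return c, kl

ca = ConditioningAugmentation()
t = torch.randn(4, 1024)                              # a batch of fixed text embeddings
c, kl = ca(t)
print(c.shape, kl.item())                             # torch.Size([4, 128]) and a scalar
```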
Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: StackGAN++
Proposed by the same authors as StackGAN, StackGAN++ is also a stacked GAN model, but organizes the generators and discriminators in a “tree-like” structure BIBREF47 with multiple stages. The first stage combines a noise vector and conditioning variables (with Conditioning Augmentation introduced in BIBREF33) for input to the first generator, which generates a low-resolution image, 64$\times $64 by default (this can be changed depending on the desired number of stages). Each following stage uses the result from the previous stage and the conditioning variables to produce gradually higher-resolution images. These stages do not use the noise vector again, as the creators assume that the randomness it introduces is already preserved in the output of the first stage. The final stage produces a 256$\times $256 high-quality image.
StackGAN++ introduces the joint conditional and unconditional approximation in their designs BIBREF47. The discriminators are trained to calculate the loss between the image produced by the generator and the conditioning variables (measuring how accurately the image represents the description) as well as the loss between the image and real images (probability of the image being real or fake). The generators then aim to minimize the sum of these losses, improving the final result.
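The joint conditional and unconditional approximation can be sketched as follows, assuming a toy discriminator that exposes an unconditional real/fake head and a conditional head that also sees the sentence-level conditioning vector; the two binary cross-entropy terms are simply summed. The module interface and sizes are assumptions for illustration, not the exact StackGAN++ implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadDiscriminator(nn.Module):
    """Toy discriminator with an unconditional head and a text-conditional head."""
    def __init__(self, img_dim=64 * 64 * 3, cond_dim=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2))
        self.uncond_head = nn.Linear(256, 1)
        self.cond_head = nn.Linear(256 + cond_dim, 1)

    def forward(self, img, cond=None):
        h = self.body(img)
        if cond is None:
            return self.uncond_head(h)                       # unconditional logit
        return self.cond_head(torch.cat([h, cond], dim=1))   # conditional logit

def discriminator_loss(D, real_imgs, fake_imgs, cond):
    ones = torch.ones(real_imgs.size(0), 1)
    zeros = torch.zeros(fake_imgs.size(0), 1)
    # Unconditional term: is the image realistic at all?
    uncond = (F.binary_cross_entropy_with_logits(D(real_imgs), ones)
              + F.binary_cross_entropy_with_logits(D(fake_imgs), zeros))
    # Conditional term: does the image match the text conditioning variables?
    cond_term = (F.binary_cross_entropy_with_logits(D(real_imgs, cond), ones)
                 + F.binary_cross_entropy_with_logits(D(fake_imgs, cond), zeros))
    return uncond + cond_term                                # joint loss

D = TwoHeadDiscriminator()
real, fake, c = torch.rand(4, 64 * 64 * 3), torch.rand(4, 64 * 64 * 3), torch.randn(4, 128)
print(discriminator_loss(D, real, fake, c).item())
```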
Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: AttnGAN
Attentional Generative Adversarial Network (AttnGAN) BIBREF10 is very similar, in terms of its structure, to StackGAN++ BIBREF47, discussed in the previous section, but some novel components are added. Like previous works BIBREF56, BIBREF8, BIBREF33, BIBREF47, a text encoder generates a text embedding with conditioning variables based on the overall sentence. Additionally, the text encoder generates a separate text embedding with conditioning variables based on individual words. This process is optimized to produce meaningful variables using a bidirectional recurrent neural network (BRNN), more specifically bidirectional Long Short Term Memory (LSTM) BIBREF57, which, for each word in the description, generates conditions based on the previous word as well as the next word (bidirectional). The first stage of AttnGAN generates a low-resolution image based on the sentence-level text embedding and random noise vector. The output is fed along with the word-level text embedding to an “attention model”, which matches the word-level conditioning variables to regions of the stage I image, producing a word-context matrix. This is then fed to the next stage of the model along with the raw previous stage output. Each consecutive stage works in the same manner, but produces gradually higher-resolution images conditioned on the previous stage.
Two major contributions were introduced in AttnGAN: the attentional generative network and the Deep Attentional Multimodal Similarity Model (DAMSM) BIBREF47. The attentional generative network matches specific regions of each stage's output image to conditioning variables from the word-level text embedding. This is a very worthy contribution, allowing each consecutive stage to focus on specific regions of the image independently, adding “attentional” details region by region as opposed to the whole image. The DAMSM is also a key feature introduced by AttnGAN, which is used after the result of the final stage to calculate the similarity between the generated image and the text embedding at both the sentence level and the more fine-grained word level. Table TABREF48 shows scores from different metrics for StackGAN, StackGAN++, AttnGAN, and HDGAN on the CUB, Oxford, and COCO datasets. The table shows that AttnGAN outperforms the other models in terms of IS on the CUB dataset by a small amount and greatly outperforms them on the COCO dataset.
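A simplified sketch of the word-to-region attention is given below: each image region attends over the projected word embeddings, and the resulting per-region word-context vectors form the word-context matrix passed to the next stage. The tensor shapes and the single linear projection are assumptions made for illustration; the actual AttnGAN attention module and the DAMSM involve additional projections and a sentence-level term.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def word_context(region_feats, word_feats, proj):
    """region_feats: (B, N, D) image region features; word_feats: (B, T, Dw) word embeddings.
    Returns (B, N, D): one word-context vector per image region."""
    e = proj(word_feats)                                  # (B, T, D) project words into image space
    scores = torch.bmm(region_feats, e.transpose(1, 2))   # (B, N, T) region-word similarity
    attn = F.softmax(scores, dim=-1)                      # attention over words for each region
    return torch.bmm(attn, e)                             # (B, N, D) word-context matrix

B, N, T, D, Dw = 2, 16 * 16, 12, 32, 256                  # 16x16 regions, 12 words (toy sizes)
proj = nn.Linear(Dw, D)
regions = torch.randn(B, N, D)
words = torch.randn(B, T, Dw)
print(word_context(regions, words, proj).shape)           # torch.Size([2, 256, 32])
```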
Text-to-Image Synthesis Taxonomy and Categorization ::: Resolution Enhancement GANs ::: HDGAN
Hierarchically-nested adversarial network (HDGAN) is a method proposed by BIBREF36, and its main objective is to tackle the difficult problem of generating photographic images from semantic text descriptions, where the descriptions correspond to images from diverse datasets. The method introduces adversarial objectives nested inside hierarchically-oriented networks BIBREF36. The hierarchical nesting helps regularize mid-level representations and assists the training of the generator in capturing the highly complex statistics of still images directly from the image data. The paper adopts a single-stream generator architecture that adapts jointly to the set of nested discriminators; once the jointed discriminators are set up, the single-stream generator progressively advances the generated images to much higher resolutions BIBREF36.
The main contributions of HDGAN include the introduction of a visual-semantic similarity measure BIBREF36. This feature aids in evaluating the consistency of generated images, and a key objective of this step is to test the logical consistency of the end product BIBREF36, namely images that are semantically mapped from text-based natural language descriptions to each area of the picture, e.g. a wing on a bird or a petal on a flower. Deep learning has created a multitude of opportunities and challenges for researchers in the computer vision AI field, and coupled with GAN and multimodal learning architectures, this field has seen tremendous growth BIBREF8, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF22, BIBREF26. Building on these advancements, HDGAN attempts to further extend some desirable and less common features when generating images from textual natural language BIBREF36. In other words, it takes sentences and treats them as a hierarchical structure. This has both positive and negative implications: it makes generating compelling images more complex, but a key benefit of this elaborate process is the realism obtained once all stages are completed. In addition, one common feature added to this process is the ability to identify parts of sentences with bounding boxes: if a sentence includes common characteristics of a bird, the model surrounds the attributes of that bird with bounding boxes. In practice, the same should happen if the desired image has other elements such as human faces (e.g. eyes, hair, etc.), flowers (e.g. petal size, color, etc.), or any other inanimate object (e.g. a table, a mug, etc.). Finally, HDGAN evaluated some of its claims on common text-to-image datasets such as CUB, COCO, and Oxford-102 BIBREF8, BIBREF36, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF22, BIBREF26. These datasets were first utilized in earlier works BIBREF8, and most of them include additional features such as image annotations, labels, or descriptions. The qualitative and quantitative results reported by the researchers in this study were far superior to those of earlier works in this same field of computer vision AI.
Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs
In this subsection, we introduce text-to-image synthesis methods which try to maximize the diversity of the output images, based on the text descriptions.
Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: AC-GAN
Two issues arise in traditional GANs BIBREF58 for image synthesis: (1) the scalability problem: traditional GANs cannot predict a large number of image categories; and (2) the diversity problem: images are often subject to one-to-many mapping, so one image could be labeled with different tags or described using different texts. To address these problems, GANs conditioned on additional information, e.g. cGAN, are an alternative solution. However, although cGAN and many previously introduced approaches are able to generate images with respect to the text descriptions, they often output images with similar types and visual appearance.
Slightly different from the cGAN, the auxiliary classifier GAN (AC-GAN) BIBREF27 proposes to improve the diversity of output images by using an auxiliary classifier to control output images. The overall structure of AC-GAN is shown in Fig. FIGREF15(c). In AC-GAN, every generated image is associated with a class label, in addition to the true/fake label which is commonly used in GAN or cGAN. The discriminator of AC-GAN not only outputs a probability distribution over sources (i.e. whether the image is true or fake), it also outputs a probability distribution over the class label (i.e. predicts which class the image belongs to).
By using an auxiliary classifier layer to predict the class of the image, AC-GAN is able to use the predicted class labels of the images to ensure that the output consists of images from different classes, resulting in diversified synthesized images. The results show that AC-GAN can generate images with high diversity.
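A minimal sketch of the two-headed discriminator objective described above is given below. It assumes shared discriminator features feeding a real/fake head and an auxiliary class head; this is an illustration of the idea, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ACGANHeads(nn.Module):
    """Source (real/fake) head plus auxiliary class head on shared features."""
    def __init__(self, feat_dim, n_classes):
        super().__init__()
        self.source = nn.Linear(feat_dim, 1)        # real/fake logit
        self.aux = nn.Linear(feat_dim, n_classes)   # class logits

    def forward(self, feats):
        return self.source(feats), self.aux(feats)

def acgan_d_loss(src_real, cls_real, src_fake, cls_fake, labels):
    """Discriminator loss: adversarial term plus auxiliary classification term."""
    adv = (F.binary_cross_entropy_with_logits(src_real, torch.ones_like(src_real))
           + F.binary_cross_entropy_with_logits(src_fake, torch.zeros_like(src_fake)))
    aux = F.cross_entropy(cls_real, labels) + F.cross_entropy(cls_fake, labels)
    return adv + aux
```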
Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: TAC-GAN
Building on the AC-GAN, TAC-GAN BIBREF59 is proposed to replace the class information with textual descriptions as the input to perform the task of text to image synthesis. The architecture of TAC-GAN is shown in Fig. FIGREF15(d), which is similar to AC-GAN. Overall, the major difference between TAC-GAN and AC-GAN is that TAC-GAN conditions the generated images on text descriptions instead of on a class label. This design makes TAC-GAN more generic for image synthesis.
TAC-GAN imposes restrictions on generated images through both texts and class labels. The input vector of TAC-GAN's generative network is built from a noise vector and an embedded vector representation of the textual description. The discriminator of TAC-GAN is similar to that of the AC-GAN, which not only predicts whether the image is fake or not, but also predicts the label of the image. A minor difference of TAC-GAN's discriminator, compared to that of the AC-GAN, is that it also receives text information as input before performing its classification.
The experiments and validations, on the Oxford-102 flowers dataset, show that the results produced by TAC-GAN are “slightly better” than other approaches, including GAN-INT-CLS and StackGAN.
Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: Text-SeGAN
In order to improve the diversity of the output images, both AC-GAN and TAC-GAN's discriminators predict class labels of the synthesised images. This process likely enforces the semantic diversity of the images, but class labels are inherently restrictive in describing image semantics, and images described by text can be matched to multiple labels. Therefore, instead of predicting images' class labels, an alternative solution is to directly quantify their semantic relevance.
The architecture of Text-SeGAN is shown in Fig. FIGREF15(e). In order to directly quantify semantic relevance, Text-SeGAN BIBREF28 adds a regression layer to estimate the semantic relevance between the image and text instead of a classifier layer predicting labels. The estimated semantic relevance is a fractional value ranging between 0 and 1, with a higher value reflecting better semantic relevance between the image and text. Due to this unique design, an inherent advantage of Text-SeGAN is that the generated images are not limited to certain classes and are semantically matched to the text input.
Experiments and validations, on the Oxford-102 flower dataset, show that Text-SeGAN can generate diverse images that are semantically relevant to the input text. In addition, the results of Text-SeGAN show an improved Inception Score compared to other approaches, including GAN-INT-CLS, StackGAN, TAC-GAN, and HDGAN.
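The regression head replacing the class-label classifier can be sketched as follows. The layer sizes and the way image and text features are fused (simple concatenation) are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class RelevanceHead(nn.Module):
    """Estimates a fractional image-text semantic relevance score in [0, 1]."""
    def __init__(self, img_dim, txt_dim, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(img_dim + txt_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),  # higher value = better semantic relevance
        )

    def forward(self, img_feat, txt_feat):
        return self.mlp(torch.cat([img_feat, txt_feat], dim=-1))
```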
Text-to-Image Synthesis Taxonomy and Categorization ::: Diversity Enhancement GANs ::: MirrorGAN and Scene Graph GAN
Due to the inherent complexity of visual images and the diversity of text descriptions (i.e. the same words can imply different meanings), it is difficult to precisely match texts to visual images at the semantic level. Most methods discussed so far employ a direct text-to-image generation process, but there is no validation of how well the generated images comply with the text in a reverse fashion.
To ensure semantic consistency and diversity, MirrorGAN BIBREF60 employs a mirror structure, which reversely learns from generated images to output texts (an image-to-text process) to further validate whether the generated images are indeed consistent with the input texts. MirrorGAN includes three modules: a semantic text embedding module (STEM), a global-local collaborative attentive module for cascaded image generation (GLAM), and a semantic text regeneration and alignment module (STREAM). The back-to-back text-to-image (T2I) and image-to-text (I2T) processes are combined to progressively enhance the diversity and semantic consistency of the generated images.
In order to enhance the diversity of the output images, Scene Graph GAN BIBREF61 proposes to use visual scene graphs to describe the layout of the objects, allowing users to precisely specify the relationships between objects in the images. In order to convert the visual scene graph into input for the GAN to generate images, this method uses graph convolution to process input graphs. It computes a scene layout by predicting bounding boxes and segmentation masks for objects. After that, it converts the computed layout to an image with a cascaded refinement network.
Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs
Instead of focusing on generating static images, another line of text-to-image synthesis research focuses on generating videos (i.e. sequences of images) from texts. In this context, the synthesised videos are often useful resources for automated assistance or story telling.
Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs ::: ObamaNet and T2S
One early and interesting work on motion enhancement GANs generates spoofed speech and lip-synced videos (i.e. a talking face) of Barack Obama (ObamaNet) based on text input BIBREF62. This framework consists of three parts: text to speech using “Char2Wav”, mouth-shape representation synced to the audio using a time-delayed LSTM, and “video generation” conditioned on the mouth shape using a “U-Net” architecture. Although the results seem promising, ObamaNet only models the mouth region and the videos are not generated from noise, so it can be regarded as video prediction rather than video generation.
Another meaningful trial of using synthesised videos for automated assistance is to translate spoken language (e.g. text) into sign language video sequences (i.e. T2S) BIBREF63. This is often achieved through a two-step process: converting texts into meaningful units to generate images, followed by a learning component that arranges the images into a sequential order for best representation. More specifically, using RNN-based machine translation methods, texts are translated into sign language gloss sequences. Then, glosses are mapped to skeletal pose sequences using a lookup table. To generate videos, a conditional DCGAN is built whose input is the concatenation of the latent representation of a base-pose image and the skeletal pose information.
Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs ::: T2V
In BIBREF64, a text-to-video model (T2V) is proposed based on the cGAN framework, in which isometric Gaussian noise combined with a text-gist vector serves as the generator input. A key component of generating videos from text is to train a conditional generative model to extract both static and dynamic information from text, followed by a hybrid framework combining a Variational Autoencoder (VAE) and a Generative Adversarial Network (GAN).
More specifically, T2V relies on two types of features, static features and dynamic features, to generate videos. Static features, called the “gist”, are used to sketch the text-conditioned background color and object layout structure. Dynamic features, on the other hand, are obtained by transforming the input text into an image filter, which eventually forms the video generator consisting of three entangled neural networks. The text-gist vector is generated by a gist generator, which maintains static information (e.g. background), and a text2filter, which captures the dynamic information (i.e. actions) in the text to generate videos.
As demonstrated in the paper BIBREF64, the generated videos are semantically related to the texts, but have a rather low quality (e.g. only $64 \times 64$ resolution).
Text-to-Image Synthesis Taxonomy and Categorization ::: Motion Enhancement GANs ::: StoryGAN
Different from T2V, which generates videos from a single text, StoryGAN aims to produce dynamic scenes consistent with the specified texts (i.e. a story written in a multi-sentence paragraph) using a sequential GAN model BIBREF65. A story encoder, a context encoder, and discriminators are the main components of this model. By using stochastic sampling, the story encoder learns a low-dimensional embedding vector for the whole story to keep the continuity of the story. The context encoder is proposed to capture contextual information during sequential image generation based on a deep RNN. StoryGAN has two discriminators: an image discriminator, which evaluates the generated images, and a story discriminator, which ensures global consistency.
The experiments and comparisons, on the CLEVR dataset and the Pororo cartoon dataset, which were originally used for visual question answering, show that StoryGAN improves the generated video quality in terms of Structural Similarity Index (SSIM), visual quality, consistency, and relevance (the last three measures are based on human evaluation).
GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Text-to-image Synthesis Applications
Computer vision applications have strong potential for industries including but not limited to the medical, government, military, entertainment, and online social media fields BIBREF7, BIBREF66, BIBREF67, BIBREF68, BIBREF69, BIBREF70. Text-to-image synthesis is one such application in computer vision AI that has become the main focus in recent years due to its potential for providing beneficial properties and opportunities for a wide range of applicable areas.
Text-to-image synthesis is an application byproduct of deep convolutional decoder networks in combination with GANs BIBREF7, BIBREF8, BIBREF10. Deep convolutional networks have contributed to several breakthroughs in image, video, speech, and audio processing. This learning method intends, among other possibilities, to help translate sequential text descriptions into images, supplemented by one or more additional methods. Algorithms and methods developed in the computer vision field have allowed researchers in recent years to create realistic images from plain sentences. Advances in computer vision, deep convolutional nets, and semantic units have shone a light on, and redirected focus to, this research area of text-to-image synthesis, whose prime directive is to aid in the generation of compelling images with as much fidelity to the text descriptions as possible.
To date, models for generating synthetic images from textual natural language in research laboratories at universities and private companies have yielded compelling images of flowers and birds BIBREF8. Though flowers and birds are the most common objects studied thus far, research has been applied to other classes as well. For example, there have been studies focused solely on human faces BIBREF7, BIBREF8, BIBREF71, BIBREF72.
It’s a fascinating time for computer vision AI and deep learning researchers and enthusiasts. The consistent advancement in hardware, software, and contemporaneous development of computer vision AI research disrupts multiple industries. These advances in technology allow for the extraction of several data types from a variety of sources. For example, image data captured from a variety of photo-ready devices, such as smart-phones, and online social media services opened the door to the analysis of large amounts of media datasets BIBREF70. The availability of large media datasets allow new frameworks and algorithms to be proposed and tested on real-world data.
GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Text-to-image Synthesis Benchmark Datasets
A summary of some reviewed methods and benchmark datasets used for validation is reported in Table TABREF43. In addition, the performance of different GANs with respect to the benchmark datasets and performance metrics is reported in Table TABREF48.
In order to synthesize images from text descriptions, many frameworks have taken a minimalistic approach by creating small and background-less images BIBREF73. In most cases, the experiments were conducted on simple datasets, initially containing images of birds and flowers. BIBREF8 contributed to these data sets by adding corresponding natural language text descriptions to subsets of the CUB, MSCOCO, and Oxford-102 datasets, which facilitated the work on text-to-image synthesis for several papers released more recently.
While most deep learning algorithms use the MNIST BIBREF74 dataset as a benchmark, there are four main datasets that are commonly used for evaluation of proposed GAN models for text-to-image synthesis: CUB BIBREF75, Oxford BIBREF76, COCO BIBREF77, and CIFAR-10 BIBREF78. CUB BIBREF75 contains 200 bird categories with matching text descriptions, and Oxford BIBREF76 contains 102 categories of flowers with 40-258 images each and matching text descriptions. These datasets contain individual objects, with the text description corresponding to that object, making them relatively simple. COCO BIBREF77 is much more complex, containing 328k images with 91 different object types. The CIFAR-10 BIBREF78 dataset consists of 60000 32$\times$32 colour images in 10 classes, with 6000 images per class. In contrast to CUB and Oxford, whose images each contain an individual object, COCO’s images may contain multiple objects, each with a label, so there are many labels per image. The total number of labels over the 328k images is 2.5 million BIBREF77.
GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Text-to-image Synthesis Benchmark Evaluation Metrics
Several evaluation metrics are used for judging the images produced by text-to-image GANs. Proposed by BIBREF25, the Inception Score (IS) considers the entropy (randomness) of the conditional distribution, obtained by applying the Inception Model introduced in BIBREF79, and of the marginal distribution of a large set of generated images, which should be low and high, respectively, for meaningful images. Low entropy of the conditional distribution means that the evaluator is confident that the images came from the data distribution, and high entropy of the marginal distribution means that the set of generated images is diverse, which are both desired features. The IS is then computed as the exponential of the expected KL-divergence between the conditional and marginal distributions. FCN-scores BIBREF2 are computed in a similar spirit, relying on the intuition that realistic images generated by a GAN should be classified correctly by a classifier trained on real images of the same distribution. Therefore, if the FCN classifier classifies a set of synthetic images accurately, the images are probably realistic, and the corresponding GAN gets a high FCN score. Frechet Inception Distance (FID) BIBREF80 is the other commonly used evaluation metric, and takes a different approach, directly comparing the statistics of the generated images to those of real images in the distribution. A high FID means there is little relationship between the statistics of the synthetic and real images and vice versa, so lower FIDs are better.
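The two most common metrics can be computed from Inception outputs as sketched below, using the standard formulas (exponentiated expected KL-divergence for IS; squared mean difference plus a covariance term for FID). This is a generic sketch, not the official evaluation code used by the surveyed papers.

```python
import numpy as np
from scipy.linalg import sqrtm

def inception_score(probs, eps=1e-12):
    """probs: (n_images, n_classes) Inception softmax outputs for generated images."""
    marginal = probs.mean(axis=0, keepdims=True)
    kl = (probs * (np.log(probs + eps) - np.log(marginal + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))

def frechet_inception_distance(real_feats, fake_feats):
    """real_feats, fake_feats: (n_images, d) Inception activations."""
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):   # numerical noise can introduce tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu_r - mu_f) ** 2) + np.trace(cov_r + cov_f - 2.0 * covmean))
```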
The performance of different GANs with respect to the benchmark datasets and performance metrics is reported in Table TABREF48. In addition, Figure FIGREF49 further lists the performance of 14 GANs with respect to their Inception Scores (IS).
GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: GAN Based Text-to-image Synthesis Results Comparison
While we gathered all the data we could find on scores for each model on the CUB, Oxford, and COCO datasets using IS, FID, FCN, and human classifiers, we unfortunately were unable to find certain data for AttnGAN and HDGAN (missing in Table TABREF48). The best evaluation we can give for those with missing data is our own opinions by looking at examples of generated images provided in their papers. In this regard, we observed that HDGAN produced relatively better visual results on the CUB and Oxford datasets while AttnGAN produced far more impressive results than the rest on the more complex COCO dataset. This is evidence that the attentional model and DAMSM introduced by AttnGAN are very effective in producing high-quality images. Examples of the best results of birds and plates of vegetables generated by each model are presented in Figures FIGREF50 and FIGREF51, respectively.
In terms of Inception Score (IS), which is the metric that was applied to the majority of models except DC-GAN, the results in Table TABREF48 show that StackGAN++ only showed a slight improvement over its predecessor, StackGAN, for text-to-image synthesis. However, StackGAN++ did introduce a very worthy enhancement for unconditional image generation by organizing the generators and discriminators in a “tree-like” structure. This indicates that revising the structures of the discriminators and/or generators can bring a moderate level of improvement in text-to-image synthesis.
In addition, the results in Table TABREF48 also show that DM-GAN BIBREF53 has the best performance, followed by Obj-GAN BIBREF81. Notice that both DM-GAN and Obj-GAN are the most recently developed methods in the field (both published in 2019), indicating that research in text-to-image synthesis is continuously improving the results for better visual perception and interpretation. Technically, DM-GAN BIBREF53 is a model using dynamic memory to refine fuzzy image contents initially generated from the GAN networks. A memory writing gate is used by DM-GAN to select important text information and generate images based on the selected text accordingly. On the other hand, Obj-GAN BIBREF81 focuses on object-centered text-to-image synthesis. The proposed framework of Obj-GAN consists of a layout generation module, including a bounding box generator and a shape generator, and an object-driven attentive image generator. The designs and advancement of DM-GAN and Obj-GAN indicate that research in text-to-image synthesis is advancing to put more emphasis on image details and text semantics for better understanding and perception.
GAN Based Text-to-Image Synthesis Applications, Benchmark, and Evaluation and Comparisons ::: Notable Mentions
It is worth noting that although this survey mainly focuses on text-to-image synthesis, there have been other applications of GANs in broader image synthesis field that we found fascinating and worth dedicating a small section to. For example, BIBREF72 used Sem-Latent GANs to generate images of faces based on facial attributes, producing impressive results that, at a glance, could be mistaken for real faces. BIBREF82, BIBREF70, and BIBREF83 demonstrated great success in generating text descriptions from images (image captioning) with great accuracy, with BIBREF82 using an attention-based model that automatically learns to focus on salient objects and BIBREF83 using deep visual-semantic alignments. Finally, there is a contribution made by StackGAN++ that was not mentioned in the dedicated section due to its relation to unconditional image generation as opposed to conditional, namely a color-regularization term BIBREF47. This additional term aims to keep the samples generated from the same input at different stages more consistent in color, which resulted in significantly better results for the unconditional model.
Conclusion
The recent advancement in text-to-image synthesis research opens the door to several compelling methods and architectures. The main objective of text-to-image synthesis initially was to create images from simple labels, and this objective later scaled to natural languages. In this paper, we reviewed novel methods that generate, in our opinion, the most visually-rich and photo-realistic images, from text-based natural language. These generated images often rely on generative adversarial networks (GANs), deep convolutional decoder networks, and multimodal learning methods.
In the paper, we first proposed a taxonomy to organize GAN-based text-to-image synthesis frameworks into four major groups: semantic enhancement GANs, resolution enhancement GANs, diversity enhancement GANs, and motion enhancement GANs. The taxonomy provides a clear roadmap to show the motivations, architectures, and differences of the various methods, and also outlines their evolution timeline and relationships. Following the proposed taxonomy, we reviewed important features of each method and their architectures. We indicated the model definition and key contributions from some advanced GAN frameworks, including StackGAN, StackGAN++, AttnGAN, DC-GAN, AC-GAN, TAC-GAN, HDGAN, Text-SeGAN, StoryGAN, etc. Many of the solutions surveyed in this paper tackled the highly complex challenge of generating photo-realistic images beyond swatch-size samples; in other words, beyond the work of BIBREF8 in which images were generated from text in 64$\times $64 tiny swatches. Lastly, all methods were evaluated on datasets that included birds, flowers, humans, and other miscellaneous elements. We also identified some important papers that were as impressive as the papers we ultimately surveyed, though these notable papers have yet to contribute directly or indirectly to the expansion of the vast computer vision AI field. Looking into the future, an excellent extension of the works surveyed in this paper would be to give more independence to the several learning methods involved in the studies (e.g. less human intervention), as well as increasing the size of the output images.
Conflict of Interest
The authors declare that there is no conflict of interest regarding the publication of this article. | Semantic Enhancement GANs: DC-GANs, MC-GAN
Resolution Enhancement GANs: StackGANs, AttnGAN, HDGAN
Diversity Enhancement GANs: AC-GAN, TAC-GAN etc.
Motion Enhancement GANs: T2S, T2V, StoryGAN |
7fe48939ce341212c1d801095517dc552b98e7b3 | 7fe48939ce341212c1d801095517dc552b98e7b3_0 | Q: Where do they employ feature-wise sigmoid gating?
Text: Introduction
Incorporating sub-word structures like substrings, morphemes and characters to the creation of word representations significantly increases their quality as reflected both by intrinsic metrics and performance in a wide range of downstream tasks BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 .
The reason for this improvement is related to sub-word structures containing information that is usually ignored by standard word-level models. Indeed, when representing words as vectors extracted from a lookup table, semantically related words resulting from inflectional processes such as surf, surfing, and surfed, are treated as being independent from one another. Further, word-level embeddings do not account for derivational processes resulting in syntactically-similar words with different meanings such as break, breakable, and unbreakable. This causes derived words, which are usually less frequent, to have lower-quality (or no) vector representations.
Previous works have successfully combined character-level and word-level word representations, obtaining overall better results than using only word-level representations. For example BIBREF1 achieved state-of-the-art results in a machine translation task by representing unknown words as a composition of their characters. BIBREF4 created word representations by adding the vector representations of the words' surface forms and their morphemes ( INLINEFORM0 ), obtaining significant improvements on intrinsic evaluation tasks, word similarity and machine translation. BIBREF5 concatenated character-level and word-level representations for creating word representations, and then used them as input to their models for obtaining state-of-the-art results in Named Entity Recognition on several languages.
What these works have in common is that the models they describe first learn how to represent subword information, at character BIBREF1 , morpheme BIBREF4 , or substring BIBREF0 levels, and then combine these learned representations at the word level. The incorporation of information at a finer-grained hierarchy results in higher-quality modeling of rare words, morphological processes, and semantics BIBREF6 .
There is no consensus, however, on which combination method works better in which case, or how the choice of a combination method affects downstream performance, either measured intrinsically at the word level, or extrinsically at the sentence level.
In this paper we aim to provide some intuitions about how the choice of mechanism for combining character-level with word-level representations influences the quality of the final word representations, and the subsequent effect these have in the performance of downstream tasks. Our contributions are as follows:
Background
We are interested in studying different ways of combining word representations, obtained from different hierarchies, into a single word representation. Specifically, we want to study how combining word representations (1) taken directly from a word embedding lookup table, and (2) obtained from a function over the characters composing them, affects the quality of the final word representations.
Let INLINEFORM0 be a set, or vocabulary, of words with INLINEFORM1 elements, and INLINEFORM2 a vocabulary of characters with INLINEFORM3 elements. Further, let INLINEFORM4 be a sequence of words, and INLINEFORM5 be the sequence of characters composing INLINEFORM6 . Each token INLINEFORM7 can be represented as a vector INLINEFORM8 extracted directly from an embedding lookup table INLINEFORM9 , pre-trained or otherwise, and as a vector INLINEFORM10 built from the characters that compose it; in other words, INLINEFORM11 , where INLINEFORM12 is a function that maps a sequence of characters to a vector.
The methods for combining word and character-level representations we study, are of the form INLINEFORM0 where INLINEFORM1 is the final word representation.
Mapping Characters to Character-level Word Representations
The function INLINEFORM0 is composed of an embedding layer, an optional context function, and an aggregation function.
The embedding layer transforms each character INLINEFORM0 into a vector INLINEFORM1 of dimension INLINEFORM2 , by directly taking it from a trainable embedding lookup table INLINEFORM3 . We define the matrix representation of word INLINEFORM4 as INLINEFORM5 .
The context function takes INLINEFORM0 as input and returns a context-enriched matrix representation INLINEFORM1 , in which each INLINEFORM2 contains a measure of information about its context, and interactions with its neighbors. In particular, we chose to do this by feeding INLINEFORM3 to a BiLSTM BIBREF7 , BIBREF8 .
Informally, we can think of LSTM BIBREF10 as a function INLINEFORM0 that takes a matrix INLINEFORM1 as input and returns a context-enriched matrix representation INLINEFORM2 , where each INLINEFORM3 encodes information about the previous elements INLINEFORM4 .
A BiLSTM is simply composed of 2 LSTM, one that reads the input from left to right (forward), and another that does so from right to left (backward). The output of the forward and backward LSTM are INLINEFORM0 and INLINEFORM1 respectively. In the backward case the LSTM reads INLINEFORM2 first and INLINEFORM3 last, therefore INLINEFORM4 will encode the context from INLINEFORM5 .
The aggregation function takes the context-enriched matrix representation of word INLINEFORM0 for both directions, INLINEFORM1 and INLINEFORM2 , and returns a single vector INLINEFORM3 . To do so we followed BIBREF11 , and defined the character-level representation INLINEFORM4 of word INLINEFORM5 as the linear combination of the forward and backward last hidden states returned by the context function: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are trainable parameters, and INLINEFORM2 represents the concatenation operation between two vectors.
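The character-level pipeline described in this section (embedding layer, BiLSTM context function, and aggregation over the last forward and backward hidden states) can be sketched as follows. Dimensions are illustrative and the class name is not from the authors' code.

```python
import torch
import torch.nn as nn

class CharToWordEncoder(nn.Module):
    """Maps a word's character ids to a single character-level word vector."""
    def __init__(self, n_chars, char_dim=50, hidden_dim=300):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.bilstm = nn.LSTM(char_dim, hidden_dim,
                              bidirectional=True, batch_first=True)
        # aggregation: linear map over the concatenated last hidden states
        self.proj = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, char_ids):                     # (batch, word_len)
        embedded = self.char_emb(char_ids)           # (batch, word_len, char_dim)
        _, (h_n, _) = self.bilstm(embedded)          # h_n: (2, batch, hidden_dim)
        h_cat = torch.cat([h_n[0], h_n[1]], dim=-1)  # [forward ; backward]
        return self.proj(h_cat)                      # (batch, hidden_dim)
```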
Combining Character and Word-level Representations
We tested three different methods for combining INLINEFORM0 with INLINEFORM1 : simple concatenation, a learned scalar gate BIBREF11 , and a learned vector gate (also referred to as feature-wise sigmoidal gate). Additionally, we compared these methods to two baselines: using pre-trained word vectors only, and using character-only features for representing words. See fig:methods for a visual description of the proposed methods.
word-only (w) considers only INLINEFORM0 and ignores INLINEFORM1 : DISPLAYFORM0
char-only (c) considers only INLINEFORM0 and ignores INLINEFORM1 : DISPLAYFORM0
concat (cat) concatenates both word and character-level representations: DISPLAYFORM0
scalar gate (sg) implements the scalar gating mechanism described by BIBREF11 : DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are trainable parameters, INLINEFORM2 , and INLINEFORM3 is the sigmoid function.
vector gate (vg): DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are trainable parameters, INLINEFORM2 , INLINEFORM3 is the element-wise sigmoid function, INLINEFORM4 is the element-wise product for vectors, and INLINEFORM5 is a vector of ones.
The vector gate is inspired by BIBREF11 and BIBREF12 , but is different to the former in that the gating mechanism acts upon each dimension of the word and character-level vectors, and different to the latter in that it does not rely on external sources of information for calculating the gating mechanism.
Finally, note that word only and char only are special cases of both gating mechanisms: INLINEFORM0 (scalar gate) and INLINEFORM1 (vector gate) correspond to word only; INLINEFORM2 and INLINEFORM3 correspond to char only.
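The combination methods can be sketched as follows. The gate is computed from the word-level vector, consistent with the later description of the gating parameter being conditioned on word representations; the exact parameterization and class name are an illustration rather than the authors' code.

```python
import torch
import torch.nn as nn

class GatedCombination(nn.Module):
    """Combine word vector w and character-level vector c of the same dimension.
    mode='scalar': one gate value per word; mode='vector': one gate per dimension
    (the feature-wise sigmoid gate); mode='concat': simple concatenation."""
    def __init__(self, dim, mode="vector"):
        super().__init__()
        self.mode = mode
        if mode in ("scalar", "vector"):
            self.gate = nn.Linear(dim, 1 if mode == "scalar" else dim)

    def forward(self, w, c):                   # both (batch, dim)
        if self.mode == "concat":
            return torch.cat([w, c], dim=-1)
        g = torch.sigmoid(self.gate(w))        # (batch, 1) or (batch, dim)
        return (1.0 - g) * w + g * c           # g=0 -> word only, g=1 -> char only
```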
Obtaining Sentence Representations
To enable sentence-level classification we need to obtain a sentence representation from the word vectors INLINEFORM0 . We achieved this by using a BiLSTM with max pooling, which was shown to be a good universal sentence encoding mechanism BIBREF13 .
Let INLINEFORM0 , be an input sentence and INLINEFORM1 its matrix representation, where each INLINEFORM2 was obtained by one of the methods described in subsec:methods. INLINEFORM3 is the context-enriched matrix representation of INLINEFORM4 obtained by feeding INLINEFORM5 to a BiLSTM of output dimension INLINEFORM6 . Lastly, INLINEFORM11 is the final sentence representation of INLINEFORM12 obtained by max-pooling INLINEFORM13 along the sequence dimension.
Finally, we initialized the word representations INLINEFORM0 using GloVe embeddings BIBREF14 , and fine-tuned them during training. Refer to app:hyperparams for details on the other hyperparameters we used.
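A sketch of the sentence encoder: a BiLSTM over the (combined) word vectors followed by max pooling along the sequence dimension. The output size follows the hyperparameters reported in the appendix (2048 per direction), but the snippet is illustrative only.

```python
import torch
import torch.nn as nn

class BiLSTMMaxPool(nn.Module):
    """BiLSTM + max pooling sentence encoder (4096-dimensional output)."""
    def __init__(self, word_dim=300, hidden_dim=2048):
        super().__init__()
        self.bilstm = nn.LSTM(word_dim, hidden_dim,
                              bidirectional=True, batch_first=True)

    def forward(self, word_vectors):             # (batch, seq_len, word_dim)
        states, _ = self.bilstm(word_vectors)    # (batch, seq_len, 2 * hidden_dim)
        sentence, _ = states.max(dim=1)          # max over the sequence dimension
        return sentence                          # (batch, 2 * hidden_dim)
```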
Experimental Setup
We trained our models for solving the Natural Language Inference (NLI) task in two datasets, SNLI BIBREF15 and MultiNLI BIBREF16 , and validated them in each corresponding development set (including the matched and mismatched development sets of MultiNLI).
For each dataset-method combination we trained 7 models initialized with different random seeds, and saved each when it reached its best validation accuracy. We then evaluated the quality of each trained model's word representations INLINEFORM0 in 10 word similarity tasks, using the system created by BIBREF17 .
Finally, we fed these obtained word vectors to a BiLSTM with max-pooling and evaluated the final sentence representations in 11 downstream transfer tasks BIBREF13 , BIBREF18 .
Datasets
Word-level Semantic Similarity A desirable property of vector representations of words is that semantically similar words should have similar vector representations. Assessing whether a set of word representations possesses this quality is referred to as the semantic similarity task. This is the most widely-used evaluation method for evaluating word representations, despite its shortcomings BIBREF20 .
This task consists of comparing the similarity between word vectors measured by a distance metric (usually cosine distance), with a similarity score obtained from human judgements. High correlation between these similarities is an indicator of good performance.
A problem with this formulation though, is that the definition of “similarity” often confounds the meaning of both similarity and relatedness. For example, cup and tea are related but dissimilar words, and this type of distinction is not always clear BIBREF21 , BIBREF22 .
To face the previous problem, we tested our methods in a wide variety of datasets, including some that explicitly model relatedness (WS353R), some that explicitly consider similarity (WS353S, SimLex999, SimVerb3500), and some where the distinction is not clear (MEN, MTurk287, MTurk771, RG, WS353). We also included the RareWords (RW) dataset for evaluating the quality of rare word representations. See appendix:datasets for a more complete description of the datasets we used.
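For reference, the word-similarity evaluation protocol (cosine similarity of word vectors compared against human scores, typically via Spearman correlation) can be sketched as follows; the handling of out-of-vocabulary words here is a simplifying assumption rather than the exact behavior of the toolkit of BIBREF17.

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate_word_similarity(embeddings, pairs):
    """embeddings: dict mapping word -> np.ndarray
    pairs: iterable of (word1, word2, human_score)
    Returns the Spearman correlation between cosine similarities and human scores,
    skipping pairs with out-of-vocabulary words."""
    model_scores, human_scores = [], []
    for w1, w2, score in pairs:
        if w1 in embeddings and w2 in embeddings:
            a, b = embeddings[w1], embeddings[w2]
            cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
            model_scores.append(cos)
            human_scores.append(score)
    rho, _ = spearmanr(model_scores, human_scores)
    return rho
```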
Sentence-level Evaluation Tasks Unlike word-level representations, there is no consensus on the desirable properties sentence representations should have. In response to this, BIBREF13 created SentEval, a sentence representation evaluation benchmark designed for assessing how well sentence representations perform in various downstream tasks BIBREF23 .
Some of the datasets included in SentEval correspond to sentiment classification (CR, MPQA, MR, SST2, and SST5), subjectivity classification (SUBJ), question-type classification (TREC), recognizing textual entailment (SICK E), estimating semantic relatedness (SICK R), and measuring textual semantic similarity (STS16, STSB). The datasets are described by BIBREF13 , and we provide pointers to their original sources in the appendix table:sentence-eval-datasets.
To evaluate these sentence representations SentEval trained a linear model on top of them, and evaluated their performance in the validation sets accompanying each dataset. The only exception was the STS16 task, in which our representations were evaluated directly.
Word Similarity
table:wordlevelresults shows the quality of word representations in terms of the correlation between word similarity scores obtained by the proposed models and word similarity scores defined by humans.
First, we can see that for each task, character only models had significantly worse performance than every other model trained on the same dataset. The most likely explanation for this is that these models are the only ones that need to learn word representations from scratch, since they have no access to the global semantic knowledge encoded by the GloVe embeddings.
Further, bold results show the overall trend that vector gates outperformed the other methods regardless of training dataset. This implies that learning how to combine character and word-level representations at the dimension level produces word vector representations that capture a notion of word similarity and relatedness that is closer to that of humans.
Additionally, results from the MNLI row in general, and underlined results in particular, show that training on MultiNLI produces word representations better at capturing word similarity. This is probably due to MultiNLI data being richer than that of SNLI. Indeed, MultiNLI data was gathered from various sources (novels, reports, letters, and telephone conversations, among others), rather than the single image captions dataset from which SNLI was created.
Exceptions to the previous rule are models evaluated in MEN and RW. The former case can be explained by the MEN dataset containing only words that appear as image labels in the ESP-Game and MIRFLICKR-1M image datasets BIBREF24 , and therefore having data that is more closely distributed to SNLI than to MultiNLI.
More notably, in the RareWords dataset BIBREF25 , the word only, concat, and scalar gate methods performed equally, despite having been trained in different datasets ( INLINEFORM0 ), and the char only method performed significantly worse when trained in MultiNLI. The vector gate, however, performed significantly better than its counterpart trained in SNLI. These facts provide evidence that this method is capable of capturing linguistic phenomena that the other methods are unable to model.
table:word-similarity-dataset lists the word-similarity datasets and their corresponding reference. As mentioned in subsec:datasets, all the word-similarity datasets contain pairs of words annotated with similarity or relatedness scores, although this difference is not always explicit. Below we provide some details for each.
MEN contains 3000 annotated word pairs with integer scores ranging from 0 to 50. Words correspond to image labels appearing in the ESP-Game and MIRFLICKR-1M image datasets.
MTurk287 contains 287 annotated pairs with scores ranging from 1.0 to 5.0. It was created from words appearing in both DBpedia and in news articles from The New York Times.
MTurk771 contains 771 annotated pairs with scores ranging from 1.0 to 5.0, with words having synonymy, holonymy or meronymy relationships sampled from WordNet BIBREF56 .
RG contains 65 annotated pairs with scores ranging from 0.0 to 4.0 representing “similarity of meaning”.
RW contains 2034 pairs of words annotated with similarity scores in a scale from 0 to 10. The words included in this dataset were obtained from Wikipedia based on their frequency, and later filtered depending on their WordNet synsets, including synonymy, hyperonymy, hyponymy, holonymy and meronymy. This dataset was created with the purpose of testing how well models can represent rare and complex words.
SimLex999 contains 999 word pairs annotated with similarity scores ranging from 0 to 10. In this case the authors explicitly considered similarity and not relatedness, addressing the shortcomings of datasets that do not, such as MEN and WS353. Words include nouns, adjectives and verbs.
SimVerb3500 contains 3500 verb pairs annotated with similarity scores ranging from 0 to 10. Verbs were obtained from the USF free association database BIBREF66 , and VerbNet BIBREF63 . This dataset was created to address the lack of representativity of verbs in SimLex999, and the fact that, at the time of creation, the best performing models had already surpassed inter-annotator agreement in verb similarity evaluation resources. Like SimLex999, this dataset also explicitly considers similarity as opposed to relatedness.
WS353 contains 353 word pairs annotated with similarity scores from 0 to 10.
WS353R is a subset of WS353 containing 252 word pairs annotated with relatedness scores. This dataset was created by asking humans to classify each WS353 word pair into one of the following classes: synonyms, antonyms, identical, hyperonym-hyponym, hyponym-hyperonym, holonym-meronym, meronym-holonym, and none-of-the-above. These annotations were later used to group the pairs into: similar pairs (synonyms, antonyms, identical, hyperonym-hyponym, and hyponym-hyperonym), related pairs (holonym-meronym, meronym-holonym, and none-of-the-above with a human similarity score greater than 5), and unrelated pairs (classified as none-of-the-above with a similarity score less than or equal to 5). This dataset is composed by the union of related and unrelated pairs.
WS353S is another subset of WS353 containing 203 word pairs annotated with similarity scores. This dataset is composed by the union of similar and unrelated pairs, as described previously.
Word Frequencies and Gating Values
fig:gatingviz shows that for more common words the vector gate mechanism tends to favor only a few dimensions while keeping a low average gating value across dimensions. On the other hand, values are greater and more homogeneous across dimensions in rarer words. Further, fig:freqvsgatevalue shows this mechanism assigns, on average, a greater gating value to less frequent words, confirming the findings by BIBREF11 , and BIBREF12 .
In other words, the less frequent the word, the more this mechanism allows the character-level representation to influence the final word representation, as shown by eq:vg. A possible interpretation of this result is that exploiting character information becomes increasingly necessary as word-level representations' quality decrease.
Another observable trend in both figures is that gating values tend to be low on average. Indeed, it is possible to see in fig:freqvsgatevalue that the average gating values range from INLINEFORM0 to INLINEFORM1 . This result corroborates the findings by BIBREF11 , stating that setting INLINEFORM2 in eq:scalar-gate, was better than setting it to higher values.
In summary, the gating mechanisms learn how to compensate the lack of expressivity of underrepresented words by selectively combining their representations with those of characters.
Sentence-level Evaluation
table:sentlevelresults shows the impact that different methods for combining character and word-level word representations have in the quality of the sentence representations produced by our models.
We can observe the same trend mentioned in subsec:word-similarity-eval, and highlighted by the difference between bold values, that models trained in MultiNLI performed better than those trained in SNLI at a statistically significant level, confirming the findings of BIBREF13 . In other words, training sentence encoders on MultiNLI yields more general sentence representations than doing so on SNLI.
The two exceptions to the previous trend, SICKE and SICKR, benefited more from models trained on SNLI. We hypothesize this is again due to both SNLI and SICK BIBREF26 having similar data distributions.
Additionally, there was no method that significantly outperformed the word only baseline in classification tasks. This means that the added expressivity offered by explicitly modeling characters, be it through concatenation or gating, was not significantly better than simply fine-tuning the pre-trained GloVe embeddings for this type of task. We hypothesize this is due to the conflation of two effects. First, the fact that morphological processes might not encode important information for solving these tasks; and second, that SNLI and MultiNLI belong to domains that are too dissimilar to the domains in which the sentence representations are being tested.
On the other hand, the vector gate significantly outperformed every other method in the STSB task when trained in both datasets, and in the STS16 task when trained in SNLI. This again hints at this method being capable of modeling phenomena at the word level, resulting in improved semantic representations at the sentence level.
Relationship Between Word- and Sentence-level Evaluation Tasks
It is clear that the better performance the vector gate had in word similarity tasks did not translate into overall better performance in downstream tasks. This confirms previous findings indicating that intrinsic word evaluation metrics are not good predictors of downstream performance BIBREF29 , BIBREF30 , BIBREF20 , BIBREF31 .
subfig:mnli-correlations shows that the word representations created by the vector gate trained in MultiNLI had positively-correlated results within several word-similarity tasks. This hints at the generality of the word representations created by this method when modeling similarity and relatedness.
However, the same cannot be said about sentence-level evaluation performance; there is no clear correlation between word similarity tasks and sentence-evaluation tasks. This is clearly illustrated by performance in the STSBenchmark, the only in which the vector gate was significantly superior, not being correlated with performance in any word-similarity dataset. This can be interpreted simply as word-level representations capturing word-similarity not being a sufficient condition for good performance in sentence-level tasks.
In general, fig:correlations shows that there are no general correlation effects spanning both training datasets and combination mechanisms. For example, subfig:snli-correlations shows that, for both word-only and concat models trained in SNLI, performance in word similarity tasks correlates positively with performance in most sentence evaluation tasks, however, this does not happen as clearly for the same models trained in MultiNLI (subfig:mnli-correlations).
Gating Mechanisms for Combining Characters and Word Representations
To the best of our knowledge, there are only two recent works that specifically study how to combine word and subword-level vector representations.
BIBREF11 propose to use a trainable scalar gating mechanism capable of learning a weighting scheme for combining character-level and word-level representations. They compared their proposed method to manually weighting both levels; using characters only; words only; or their concatenation. They found that in some datasets a specific manual weighting scheme performed better, while in others the learned scalar gate did.
BIBREF12 further expand the gating concept by making the mechanism work at a finer-grained level, learning how to weight each vector's dimensions independently, conditioned on external word-level features such as part-of-speech and named-entity tags. Similarly, they compared their proposed mechanism to using words only, characters only, and a concatenation of both, with and without external features. They found that their vector gate performed better than the other methods in all the reported tasks, and beat the state of the art in two reading comprehension tasks.
Both works showed that the gating mechanisms assigned greater importance to character-level representations in rare words, and to word-level representations in common ones, reaffirming the previous findings that subword structures in general, and characters in particular, are beneficial for modeling uncommon words.
Sentence Representation Learning
The problem of representing sentences as fixed-length vectors has been widely studied.
BIBREF32 suggested a self-adaptive hierarchical model that gradually composes words into intermediate phrase representations, and adaptively selects specific hierarchical levels for specific tasks. BIBREF33 proposed an encoder-decoder model trained by attempting to reconstruct the surrounding sentences of an encoded passage, in a fashion similar to Skip-gram BIBREF34 . BIBREF35 overcame the previous model's need for ordered training sentences by using autoencoders for creating the sentence representations. BIBREF36 implemented a model simpler and faster to train than the previous two, while having competitive performance. Similar to BIBREF33 , BIBREF37 suggested predicting future sentences with a hierarchical CNN-LSTM encoder.
BIBREF13 trained several sentence encoding architectures on a combination of the SNLI and MultiNLI datasets, and showed that a BiLSTM with max-pooling was the best at producing highly transferable sentence representations. More recently, BIBREF18 empirically showed that sentence representations created in a multi-task setting BIBREF38 , performed increasingly better the more tasks they were trained in. BIBREF39 proposed using an autoencoder that relies on multi-head self-attention over the concatenation of the max and mean pooled encoder outputs for producing sentence representations. Finally, BIBREF40 show that modern sentence embedding methods are not vastly superior to random methods.
The works mentioned so far usually evaluate the quality of the produced sentence representations in sentence-level downstream tasks. Common benchmarks grouping these kind of tasks include SentEval BIBREF23 , and GLUE BIBREF41 . Another trend, however, is to probe sentence representations to understand what linguistic phenomena they encode BIBREF42 , BIBREF43 , BIBREF44 , BIBREF45 , BIBREF46 .
General Feature-wise Transformations
BIBREF47 provide a review on feature-wise transformation methods, of which the mechanisms presented in this paper form a part of. In a few words, the INLINEFORM0 parameter, in both scalar gate and vector gate mechanisms, can be understood as a scaling parameter limited to the INLINEFORM1 range and conditioned on word representations, whereas adding the scaled INLINEFORM2 and INLINEFORM3 representations can be seen as biasing word representations conditioned on character representations.
The previous review extends the work by BIBREF48 , which describes the Feature-wise Linear Modulation (FiLM) framework as a generalization of Conditional Normalization methods, and apply it in visual reasoning tasks. Some of the reported findings are that, in general, scaling has greater impact than biasing, and that in a setting similar to the scalar gate, limiting the scaling parameter to INLINEFORM0 hurt performance. Future decisions involving the design of mechanisms for combining character and word-level representations should be informed by these insights.
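A generic feature-wise linear modulation layer, in the sense described above (scaling and biasing features conditioned on another representation), can be sketched as follows; this illustrates the general FiLM idea rather than any specific model from the works reviewed here.

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Scale and shift each feature of x, conditioned on a vector z."""
    def __init__(self, cond_dim, feat_dim):
        super().__init__()
        self.to_gamma = nn.Linear(cond_dim, feat_dim)  # scaling parameters
        self.to_beta = nn.Linear(cond_dim, feat_dim)   # biasing parameters

    def forward(self, x, z):          # x: (batch, feat_dim), z: (batch, cond_dim)
        return self.to_gamma(z) * x + self.to_beta(z)
```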
Conclusions
We presented an empirical study showing the effect that different ways of combining character and word representations has in word-level and sentence-level evaluation tasks.
We showed that a vector gate performed consistently better across a variety of word similarity and relatedness tasks. Additionally, despite showing inconsistent results in sentence evaluation tasks, it performed significantly better than the other methods in semantic similarity tasks.
We further showed through this mechanism, that learning character-level representations is always beneficial, and becomes increasingly so with less common words.
In the future it would be interesting to study how the choice of mechanism for combining subword and word representations affects the more recent language-model-based pretraining methods such as ELMo BIBREF49 , GPT BIBREF50 , BIBREF51 and BERT BIBREF52 .
Acknowledgements
Thanks to Edison Marrese-Taylor and Pablo Loyola for their feedback on early versions of this manuscript. We also gratefully acknowledge the support of the NVIDIA Corporation with the donation of one of the GPUs used for this research. Jorge A. Balazs is partially supported by the Japanese Government MEXT Scholarship.
Hyperparameters
We only considered words that appear at least twice, for each dataset. Those that appeared only once were considered UNK. We used the Treebank Word Tokenizer as implemented in NLTK for tokenizing the training and development datasets.
In the same fashion as conneau2017supervised, we used a batch size of 64, an SGD optimizer with an initial learning rate of INLINEFORM0 , and at each epoch divided the learning rate by 5 if the validation accuracy decreased. We also used gradient clipping when gradients were INLINEFORM1 .
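The learning-rate schedule described above can be sketched with a small helper (a simplified illustration assuming a PyTorch-style optimizer; the surrounding training-loop details are assumptions).

```python
def maybe_decay_lr(optimizer, val_acc, best_acc, factor=5.0):
    """Divide the learning rate by `factor` whenever validation accuracy decreases.
    `optimizer` is assumed to expose torch-style `param_groups`."""
    if val_acc < best_acc:
        for group in optimizer.param_groups:
            group["lr"] /= factor
    return max(val_acc, best_acc)
```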
We defined character vector representations as 50-dimensional vectors randomly initialized by sampling from the uniform distribution in the INLINEFORM0 range.
The output dimension of the character-level BiLSTM was 300 per direction, and remained of such size after combining forward and backward representations as depicted in eq. EQREF9 .
Word vector representations were initialized from the 300-dimensional GloVe vectors BIBREF14 , trained on 840B tokens from the Common Crawl, and fine-tuned during training. Words not present in the GloVe vocabulary were randomly initialized by sampling from the uniform distribution in the INLINEFORM0 range.
The input size of the word-level LSTM was 300 for every method except concat in which it was 600, and its output was always 2048 per direction, resulting in a 4096-dimensional sentence representation.
Sentence Evaluation Datasets
table:sentence-eval-datasets lists the sentence-level evaluation datasets used in this paper. The provided URLs correspond to the original sources, and not necessarily to the URLs where SentEval got the data from.
The versions of the CR, MPQA, MR, and SUBJ datasets used in this paper were the ones preprocessed by BIBREF75 . Both SST2 and SST5 correspond to preprocessed versions of the SST dataset by BIBREF74 . SST2 corresponds to a subset of SST used by BIBREF54 containing flat representations of sentences annotated with binary sentiment labels, and SST5 to another subset annotated with more fine-grained sentiment labels (very negative, negative, neutral, positive, very positive). | gating mechanism acts upon each dimension of the word and character-level vectors |
65ad17f614b7345f0077424c04c94971c831585b | 65ad17f614b7345f0077424c04c94971c831585b_0 | Q: Which model architecture do they use to obtain representations?
Text: Introduction
Incorporating sub-word structures like substrings, morphemes and characters into the creation of word representations significantly increases their quality, as reflected both by intrinsic metrics and by performance in a wide range of downstream tasks BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 .
The reason for this improvement is related to sub-word structures containing information that is usually ignored by standard word-level models. Indeed, when representing words as vectors extracted from a lookup table, semantically related words resulting from inflectional processes such as surf, surfing, and surfed, are treated as being independent from one another. Further, word-level embeddings do not account for derivational processes resulting in syntactically-similar words with different meanings such as break, breakable, and unbreakable. This causes derived words, which are usually less frequent, to have lower-quality (or no) vector representations.
Previous works have successfully combined character-level and word-level word representations, obtaining overall better results than using only word-level representations. For example BIBREF1 achieved state-of-the-art results in a machine translation task by representing unknown words as a composition of their characters. BIBREF4 created word representations by adding the vector representations of the words' surface forms and their morphemes ( INLINEFORM0 ), obtaining significant improvements on intrinsic evaluation tasks, word similarity and machine translation. BIBREF5 concatenated character-level and word-level representations for creating word representations, and then used them as input to their models for obtaining state-of-the-art results in Named Entity Recognition on several languages.
What these works have in common is that the models they describe first learn how to represent subword information, at character BIBREF1 , morpheme BIBREF4 , or substring BIBREF0 levels, and then combine these learned representations at the word level. The incorporation of information at a finer-grained hierarchy results in higher-quality modeling of rare words, morphological processes, and semantics BIBREF6 .
There is no consensus, however, on which combination method works better in which case, or how the choice of a combination method affects downstream performance, either measured intrinsically at the word level, or extrinsically at the sentence level.
In this paper we aim to provide some intuitions about how the choice of mechanism for combining character-level with word-level representations influences the quality of the final word representations, and the subsequent effect these have in the performance of downstream tasks. Our contributions are as follows:
Background
We are interested in studying different ways of combining word representations, obtained from different hierarchies, into a single word representation. Specifically, we want to study how combining word representations (1) taken directly from a word embedding lookup table, and (2) obtained from a function over the characters composing them, affects the quality of the final word representations.
Let INLINEFORM0 be a set, or vocabulary, of words with INLINEFORM1 elements, and INLINEFORM2 a vocabulary of characters with INLINEFORM3 elements. Further, let INLINEFORM4 be a sequence of words, and INLINEFORM5 be the sequence of characters composing INLINEFORM6 . Each token INLINEFORM7 can be represented as a vector INLINEFORM8 extracted directly from an embedding lookup table INLINEFORM9 , pre-trained or otherwise, and as a vector INLINEFORM10 built from the characters that compose it; in other words, INLINEFORM11 , where INLINEFORM12 is a function that maps a sequence of characters to a vector.
The methods for combining word and character-level representations we study, are of the form INLINEFORM0 where INLINEFORM1 is the final word representation.
Mapping Characters to Character-level Word Representations
The function INLINEFORM0 is composed of an embedding layer, an optional context function, and an aggregation function.
The embedding layer transforms each character INLINEFORM0 into a vector INLINEFORM1 of dimension INLINEFORM2 , by directly taking it from a trainable embedding lookup table INLINEFORM3 . We define the matrix representation of word INLINEFORM4 as INLINEFORM5 .
The context function takes INLINEFORM0 as input and returns a context-enriched matrix representation INLINEFORM1 , in which each INLINEFORM2 contains a measure of information about its context, and interactions with its neighbors. In particular, we chose to do this by feeding INLINEFORM3 to a BiLSTM BIBREF7 , BIBREF8 .
Informally, we can think of LSTM BIBREF10 as a function INLINEFORM0 that takes a matrix INLINEFORM1 as input and returns a context-enriched matrix representation INLINEFORM2 , where each INLINEFORM3 encodes information about the previous elements INLINEFORM4 .
A BiLSTM is simply composed of two LSTMs, one that reads the input from left to right (forward), and another that does so from right to left (backward). The outputs of the forward and backward LSTMs are INLINEFORM0 and INLINEFORM1 respectively. In the backward case the LSTM reads INLINEFORM2 first and INLINEFORM3 last; therefore INLINEFORM4 will encode the context from INLINEFORM5 .
The aggregation function takes the context-enriched matrix representation of word INLINEFORM0 for both directions, INLINEFORM1 and INLINEFORM2 , and returns a single vector INLINEFORM3 . To do so we followed BIBREF11 , and defined the character-level representation INLINEFORM4 of word INLINEFORM5 as the linear combination of the forward and backward last hidden states returned by the context function: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are trainable parameters, and INLINEFORM2 represents the concatenation operation between two vectors.
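The following PyTorch sketch illustrates this character-to-word pipeline: an embedding lookup, a BiLSTM context function, and an aggregation that linearly combines the concatenated last forward and backward hidden states. The class name and dimensions are illustrative assumptions (the appendix mentions 50-dimensional character vectors and a 300-per-direction BiLSTM); it is not a verbatim reimplementation of the paper's code.

```python
import torch
import torch.nn as nn

class CharWordEncoder(nn.Module):
    """Map a word's character sequence to a single character-level vector."""
    def __init__(self, num_chars, char_dim=50, hidden_dim=300):
        super().__init__()
        self.char_emb = nn.Embedding(num_chars, char_dim)             # embedding layer
        self.bilstm = nn.LSTM(char_dim, hidden_dim, bidirectional=True,
                              batch_first=True)                        # context function
        # Aggregation: linear combination of [h_fwd_last ; h_bwd_last].
        self.proj = nn.Linear(2 * hidden_dim, hidden_dim, bias=True)

    def forward(self, char_ids):
        # char_ids: (batch_of_words, max_word_length)
        embedded = self.char_emb(char_ids)
        _, (h_n, _) = self.bilstm(embedded)       # h_n: (2, batch, hidden_dim)
        h_fwd, h_bwd = h_n[0], h_n[1]             # last hidden state of each direction
        return self.proj(torch.cat([h_fwd, h_bwd], dim=-1))

# Toy usage: 4 words, each padded to 10 characters, character vocabulary of 80.
encoder = CharWordEncoder(num_chars=80)
chars = torch.randint(0, 80, (4, 10))
char_level_word_vectors = encoder(chars)   # shape: (4, 300)
```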
Combining Character and Word-level Representations
We tested three different methods for combining INLINEFORM0 with INLINEFORM1 : simple concatenation, a learned scalar gate BIBREF11 , and a learned vector gate (also referred to as feature-wise sigmoidal gate). Additionally, we compared these methods to two baselines: using pre-trained word vectors only, and using character-only features for representing words. See fig:methods for a visual description of the proposed methods.
word-only (w) considers only INLINEFORM0 and ignores INLINEFORM1 : DISPLAYFORM0
char-only (c) considers only INLINEFORM0 and ignores INLINEFORM1 : DISPLAYFORM0
concat (cat) concatenates both word and character-level representations: DISPLAYFORM0
scalar gate (sg) implements the scalar gating mechanism described by BIBREF11 : DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are trainable parameters, INLINEFORM2 , and INLINEFORM3 is the sigmoid function.
vector gate (vg): DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are trainable parameters, INLINEFORM2 , INLINEFORM3 is the element-wise sigmoid function, INLINEFORM4 is the element-wise product for vectors, and INLINEFORM5 is a vector of ones.
The vector gate is inspired by BIBREF11 and BIBREF12 , but is different to the former in that the gating mechanism acts upon each dimension of the word and character-level vectors, and different to the latter in that it does not rely on external sources of information for calculating the gating mechanism.
Finally, note that word only and char only are special cases of both gating mechanisms: INLINEFORM0 (scalar gate) and INLINEFORM1 (vector gate) correspond to word only; INLINEFORM2 and INLINEFORM3 correspond to char only.
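As a concrete, hedged illustration of these combination strategies, the sketch below implements concatenation, a scalar gate, and a vector gate over a word vector `w` and a character-level vector `c`. Since the DISPLAYFORM equations are not reproduced in this text, the gate is assumed to follow the common formulation g = sigmoid(Wx + b) conditioned on the word vector, with g scaling the character-level representation; the exact parameterization in the paper may differ.

```python
import torch
import torch.nn as nn

class Combiner(nn.Module):
    """Combine word-level (w) and character-level (c) vectors of equal size."""
    def __init__(self, dim, mode="vector_gate"):
        super().__init__()
        self.mode = mode
        out = 1 if mode == "scalar_gate" else dim
        self.gate = nn.Linear(dim, out)   # gate conditioned on the word vector

    def forward(self, w, c):
        if self.mode == "concat":
            return torch.cat([w, c], dim=-1)           # concat (cat)
        if self.mode == "scalar_gate":
            g = torch.sigmoid(self.gate(w))            # one scalar per word
        elif self.mode == "vector_gate":
            g = torch.sigmoid(self.gate(w))            # one value per dimension
        else:
            raise ValueError(self.mode)
        # Under this convention, g = 0 recovers word-only and g = 1 recovers char-only.
        return (1.0 - g) * w + g * c

w, c = torch.randn(4, 300), torch.randn(4, 300)
print(Combiner(300, "concat")(w, c).shape)        # torch.Size([4, 600])
print(Combiner(300, "vector_gate")(w, c).shape)   # torch.Size([4, 300])
```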
Obtaining Sentence Representations
To enable sentence-level classification we need to obtain a sentence representation from the word vectors INLINEFORM0 . We achieved this by using a BiLSTM with max pooling, which was shown to be a good universal sentence encoding mechanism BIBREF13 .
Let INLINEFORM0 be an input sentence and INLINEFORM1 its matrix representation, where each INLINEFORM2 was obtained by one of the methods described in subsec:methods. INLINEFORM3 is the context-enriched matrix representation of INLINEFORM4 obtained by feeding INLINEFORM5 to a BiLSTM of output dimension INLINEFORM6 . Lastly, INLINEFORM11 is the final sentence representation of INLINEFORM12 obtained by max-pooling INLINEFORM13 along the sequence dimension.
Finally, we initialized the word representations INLINEFORM0 using GloVe embeddings BIBREF14 , and fine-tuned them during training. Refer to app:hyperparams for details on the other hyperparameters we used.
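A minimal sketch of this sentence encoder, assuming a standard BiLSTM-with-max-pooling formulation; the 2048-per-direction hidden size comes from the hyperparameter appendix, and padding and masking details are omitted.

```python
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    """BiLSTM over word vectors followed by max pooling over time."""
    def __init__(self, word_dim=300, hidden_dim=2048):
        super().__init__()
        self.bilstm = nn.LSTM(word_dim, hidden_dim, bidirectional=True,
                              batch_first=True)

    def forward(self, word_vectors):
        # word_vectors: (batch, seq_len, word_dim), e.g. combined word+char vectors
        context, _ = self.bilstm(word_vectors)    # (batch, seq_len, 2 * hidden_dim)
        sentence, _ = context.max(dim=1)          # max pool along the sequence
        return sentence                           # (batch, 4096)

encoder = SentenceEncoder()
sents = torch.randn(8, 20, 300)    # 8 sentences of 20 tokens each
print(encoder(sents).shape)        # torch.Size([8, 4096])
```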
Experimental Setup
We trained our models for solving the Natural Language Inference (NLI) task in two datasets, SNLI BIBREF15 and MultiNLI BIBREF16 , and validated them in each corresponding development set (including the matched and mismatched development sets of MultiNLI).
For each dataset-method combination we trained 7 models initialized with different random seeds, and saved each when it reached its best validation accuracy. We then evaluated the quality of each trained model's word representations INLINEFORM0 in 10 word similarity tasks, using the system created by BIBREF17 .
Finally, we fed these obtained word vectors to a BiLSTM with max-pooling and evaluated the final sentence representations in 11 downstream transfer tasks BIBREF13 , BIBREF18 .
Datasets
Word-level Semantic Similarity A desirable property of vector representations of words is that semantically similar words should have similar vector representations. Assessing whether a set of word representations possesses this quality is referred to as the semantic similarity task. This is the most widely used method for evaluating word representations, despite its shortcomings BIBREF20 .
This task consists of comparing the similarity between word vectors measured by a distance metric (usually cosine distance), with a similarity score obtained from human judgements. High correlation between these similarities is an indicator of good performance.
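In code, this evaluation reduces to computing cosine similarities for the annotated word pairs and correlating them with the human scores. Spearman correlation is assumed here, since it is the usual choice for these benchmarks, although the text does not restate the exact metric.

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate_similarity(embeddings, word_pairs, human_scores):
    """Correlate cosine similarities of word pairs with human judgements.

    embeddings: dict mapping word -> np.ndarray
    word_pairs: list of (word_a, word_b) tuples
    human_scores: list of floats aligned with word_pairs
    """
    cosines = []
    for a, b in word_pairs:
        va, vb = embeddings[a], embeddings[b]
        cosines.append(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb)))
    correlation, _ = spearmanr(cosines, human_scores)
    return correlation

# Toy example with random 300-dimensional vectors.
vocab = {w: np.random.randn(300) for w in ["cup", "tea", "mug", "car"]}
pairs = [("cup", "mug"), ("cup", "tea"), ("cup", "car")]
scores = [9.0, 6.5, 1.0]
print(evaluate_similarity(vocab, pairs, scores))
```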
A problem with this formulation though, is that the definition of “similarity” often confounds the meaning of both similarity and relatedness. For example, cup and tea are related but dissimilar words, and this type of distinction is not always clear BIBREF21 , BIBREF22 .
To face the previous problem, we tested our methods in a wide variety of datasets, including some that explicitly model relatedness (WS353R), some that explicitly consider similarity (WS353S, SimLex999, SimVerb3500), and some where the distinction is not clear (MEN, MTurk287, MTurk771, RG, WS353). We also included the RareWords (RW) dataset for evaluating the quality of rare word representations. See appendix:datasets for a more complete description of the datasets we used.
Sentence-level Evaluation Tasks Unlike word-level representations, there is no consensus on the desirable properties sentence representations should have. In response to this, BIBREF13 created SentEval, a sentence representation evaluation benchmark designed for assessing how well sentence representations perform in various downstream tasks BIBREF23 .
Some of the datasets included in SentEval correspond to sentiment classification (CR, MPQA, MR, SST2, and SST5), subjectivity classification (SUBJ), question-type classification (TREC), recognizing textual entailment (SICK E), estimating semantic relatedness (SICK R), and measuring textual semantic similarity (STS16, STSB). The datasets are described by BIBREF13 , and we provide pointers to their original sources in the appendix table:sentence-eval-datasets.
To evaluate these sentence representations SentEval trained a linear model on top of them, and evaluated their performance in the validation sets accompanying each dataset. The only exception was the STS16 task, in which our representations were evaluated directly.
Word Similarity
table:wordlevelresults shows the quality of word representations in terms of the correlation between word similarity scores obtained by the proposed models and word similarity scores defined by humans.
First, we can see that for each task, character only models had significantly worse performance than every other model trained on the same dataset. The most likely explanation for this is that these models are the only ones that need to learn word representations from scratch, since they have no access to the global semantic knowledge encoded by the GloVe embeddings.
Further, bold results show the overall trend that vector gates outperformed the other methods regardless of training dataset. This implies that learning how to combine character and word-level representations at the dimension level produces word vector representations that capture a notion of word similarity and relatedness that is closer to that of humans.
Additionally, results from the MNLI row in general, and underlined results in particular, show that training on MultiNLI produces word representations better at capturing word similarity. This is probably due to MultiNLI data being richer than that of SNLI. Indeed, MultiNLI data was gathered from various sources (novels, reports, letters, and telephone conversations, among others), rather than the single image captions dataset from which SNLI was created.
Exceptions to the previous rule are models evaluated in MEN and RW. The former case can be explained by the MEN dataset containing only words that appear as image labels in the ESP-Game and MIRFLICKR-1M image datasets BIBREF24 , and therefore having data that is more closely distributed to SNLI than to MultiNLI.
More notably, in the RareWords dataset BIBREF25 , the word only, concat, and scalar gate methods performed equally, despite having been trained in different datasets ( INLINEFORM0 ), and the char only method performed significantly worse when trained in MultiNLI. The vector gate, however, performed significantly better than its counterpart trained in SNLI. These facts provide evidence that this method is capable of capturing linguistic phenomena that the other methods are unable to model.
table:word-similarity-dataset lists the word-similarity datasets and their corresponding reference. As mentioned in subsec:datasets, all the word-similarity datasets contain pairs of words annotated with similarity or relatedness scores, although this difference is not always explicit. Below we provide some details for each.
MEN contains 3000 annotated word pairs with integer scores ranging from 0 to 50. Words correspond to image labels appearing in the ESP-Game and MIRFLICKR-1M image datasets.
MTurk287 contains 287 annotated pairs with scores ranging from 1.0 to 5.0. It was created from words appearing in both DBpedia and in news articles from The New York Times.
MTurk771 contains 771 annotated pairs with scores ranging from 1.0 to 5.0, with words having synonymy, holonymy or meronymy relationships sampled from WordNet BIBREF56 .
RG contains 65 annotated pairs with scores ranging from 0.0 to 4.0 representing “similarity of meaning”.
RW contains 2034 pairs of words annotated with similarity scores in a scale from 0 to 10. The words included in this dataset were obtained from Wikipedia based on their frequency, and later filtered depending on their WordNet synsets, including synonymy, hyperonymy, hyponymy, holonymy and meronymy. This dataset was created with the purpose of testing how well models can represent rare and complex words.
SimLex999 contains 999 word pairs annotated with similarity scores ranging from 0 to 10. In this case the authors explicitly considered similarity and not relatedness, addressing the shortcomings of datasets that do not, such as MEN and WS353. Words include nouns, adjectives and verbs.
SimVerb3500 contains 3500 verb pairs annotated with similarity scores ranging from 0 to 10. Verbs were obtained from the USF free association database BIBREF66 , and VerbNet BIBREF63 . This dataset was created to address the lack of representativity of verbs in SimLex999, and the fact that, at the time of creation, the best performing models had already surpassed inter-annotator agreement in verb similarity evaluation resources. Like SimLex999, this dataset also explicitly considers similarity as opposed to relatedness.
WS353 contains 353 word pairs annotated with similarity scores from 0 to 10.
WS353R is a subset of WS353 containing 252 word pairs annotated with relatedness scores. This dataset was created by asking humans to classify each WS353 word pair into one of the following classes: synonyms, antonyms, identical, hyperonym-hyponym, hyponym-hyperonym, holonym-meronym, meronym-holonym, and none-of-the-above. These annotations were later used to group the pairs into: similar pairs (synonyms, antonyms, identical, hyperonym-hyponym, and hyponym-hyperonym), related pairs (holonym-meronym, meronym-holonym, and none-of-the-above with a human similarity score greater than 5), and unrelated pairs (classified as none-of-the-above with a similarity score less than or equal to 5). This dataset is composed of the union of related and unrelated pairs.
WS353S is another subset of WS353 containing 203 word pairs annotated with similarity scores. This dataset is composed of the union of similar and unrelated pairs, as described previously.
Word Frequencies and Gating Values
fig:gatingviz shows that for more common words the vector gate mechanism tends to favor only a few dimensions while keeping a low average gating value across dimensions. On the other hand, values are greater and more homogeneous across dimensions in rarer words. Further, fig:freqvsgatevalue shows this mechanism assigns, on average, a greater gating value to less frequent words, confirming the findings by BIBREF11 , and BIBREF12 .
In other words, the less frequent the word, the more this mechanism allows the character-level representation to influence the final word representation, as shown by eq:vg. A possible interpretation of this result is that exploiting character information becomes increasingly necessary as the quality of word-level representations decreases.
Another observable trend in both figures is that gating values tend to be low on average. Indeed, it is possible to see in fig:freqvsgatevalue that the average gating values range from INLINEFORM0 to INLINEFORM1 . This result corroborates the findings by BIBREF11 , which state that setting INLINEFORM2 in eq:scalar-gate was better than setting it to higher values.
In summary, the gating mechanisms learn how to compensate for the lack of expressivity of underrepresented words by selectively combining their representations with those of characters.
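A sketch of how such an analysis could be reproduced: compute each word's average vector-gate activation and correlate it with corpus frequency. The `vector_gate_values` hook is a hypothetical helper, not an API from the paper's code, and would need to be adapted to the actual model.

```python
import numpy as np
from collections import Counter
from scipy.stats import spearmanr

def gate_vs_frequency(model, corpus_tokens):
    """Correlate average gate activation per word with its corpus frequency.

    `model.vector_gate_values(word)` is a hypothetical hook returning the
    per-dimension gate vector g for a word; adapt it to the real model.
    """
    freqs = Counter(corpus_tokens)
    words = list(freqs)
    avg_gate = [float(np.mean(model.vector_gate_values(w))) for w in words]
    log_freq = [np.log(freqs[w]) for w in words]
    rho, _ = spearmanr(log_freq, avg_gate)
    # A negative rho would indicate that rarer words receive larger gate
    # values, i.e. more influence from the character-level representation.
    return rho
```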
Sentence-level Evaluation
table:sentlevelresults shows the impact that different methods for combining character and word-level word representations have in the quality of the sentence representations produced by our models.
We can observe the same trend mentioned in subsec:word-similarity-eval, and highlighted by the difference between bold values, that models trained in MultiNLI performed better than those trained in SNLI at a statistically significant level, confirming the findings of BIBREF13 . In other words, training sentence encoders on MultiNLI yields more general sentence representations than doing so on SNLI.
The two exceptions to the previous trend, SICKE and SICKR, benefited more from models trained on SNLI. We hypothesize this is again due to both SNLI and SICK BIBREF26 having similar data distributions.
Additionally, there was no method that significantly outperformed the word only baseline in classification tasks. This means that the added expressivity offered by explicitly modeling characters, be it through concatenation or gating, was not significantly better than simply fine-tuning the pre-trained GloVe embeddings for this type of task. We hypothesize this is due to the conflation of two effects. First, the fact that morphological processes might not encode important information for solving these tasks; and second, that SNLI and MultiNLI belong to domains that are too dissimilar to the domains in which the sentence representations are being tested.
On the other hand, the vector gate significantly outperformed every other method in the STSB task when trained in both datasets, and in the STS16 task when trained in SNLI. This again hints at this method being capable of modeling phenomena at the word level, resulting in improved semantic representations at the sentence level.
Relationship Between Word- and Sentence-level Evaluation Tasks
It is clear that the better performance the vector gate had in word similarity tasks did not translate into overall better performance in downstream tasks. This confirms previous findings indicating that intrinsic word evaluation metrics are not good predictors of downstream performance BIBREF29 , BIBREF30 , BIBREF20 , BIBREF31 .
subfig:mnli-correlations shows that the word representations created by the vector gate trained in MultiNLI had positively-correlated results within several word-similarity tasks. This hints at the generality of the word representations created by this method when modeling similarity and relatedness.
However, the same cannot be said about sentence-level evaluation performance; there is no clear correlation between word similarity tasks and sentence-evaluation tasks. This is clearly illustrated by performance in the STSBenchmark, the only task in which the vector gate was significantly superior, which is not correlated with performance in any word-similarity dataset. This can be interpreted simply as word-level representations capturing word-similarity not being a sufficient condition for good performance in sentence-level tasks.
In general, fig:correlations shows that there are no general correlation effects spanning both training datasets and combination mechanisms. For example, subfig:snli-correlations shows that, for both word-only and concat models trained in SNLI, performance in word similarity tasks correlates positively with performance in most sentence evaluation tasks, however, this does not happen as clearly for the same models trained in MultiNLI (subfig:mnli-correlations).
Gating Mechanisms for Combining Characters and Word Representations
To the best of our knowledge, there are only two recent works that specifically study how to combine word and subword-level vector representations.
BIBREF11 propose to use a trainable scalar gating mechanism capable of learning a weighting scheme for combining character-level and word-level representations. They compared their proposed method to manually weighting both levels; using characters only; words only; or their concatenation. They found that in some datasets a specific manual weighting scheme performed better, while in others the learned scalar gate did.
BIBREF12 further expand the gating concept by making the mechanism work at a finer-grained level, learning how to weight each vector's dimensions independently, conditioned on external word-level features such as part-of-speech and named-entity tags. Similarly, they compared their proposed mechanism to using words only, characters only, and a concatenation of both, with and without external features. They found that their vector gate performed better than the other methods in all the reported tasks, and beat the state of the art in two reading comprehension tasks.
Both works showed that the gating mechanisms assigned greater importance to character-level representations in rare words, and to word-level representations in common ones, reaffirming the previous findings that subword structures in general, and characters in particular, are beneficial for modeling uncommon words.
Sentence Representation Learning
The problem of representing sentences as fixed-length vectors has been widely studied.
BIBREF32 suggested a self-adaptive hierarchical model that gradually composes words into intermediate phrase representations, and adaptively selects specific hierarchical levels for specific tasks. BIBREF33 proposed an encoder-decoder model trained by attempting to reconstruct the surrounding sentences of an encoded passage, in a fashion similar to Skip-gram BIBREF34 . BIBREF35 overcame the previous model's need for ordered training sentences by using autoencoders for creating the sentence representations. BIBREF36 implemented a model simpler and faster to train than the previous two, while having competitive performance. Similar to BIBREF33 , BIBREF37 suggested predicting future sentences with a hierarchical CNN-LSTM encoder.
BIBREF13 trained several sentence encoding architectures on a combination of the SNLI and MultiNLI datasets, and showed that a BiLSTM with max-pooling was the best at producing highly transferable sentence representations. More recently, BIBREF18 empirically showed that sentence representations created in a multi-task setting BIBREF38 , performed increasingly better the more tasks they were trained in. BIBREF39 proposed using an autoencoder that relies on multi-head self-attention over the concatenation of the max and mean pooled encoder outputs for producing sentence representations. Finally, BIBREF40 show that modern sentence embedding methods are not vastly superior to random methods.
The works mentioned so far usually evaluate the quality of the produced sentence representations in sentence-level downstream tasks. Common benchmarks grouping these kinds of tasks include SentEval BIBREF23 and GLUE BIBREF41 . Another trend, however, is to probe sentence representations to understand what linguistic phenomena they encode BIBREF42 , BIBREF43 , BIBREF44 , BIBREF45 , BIBREF46 .
General Feature-wise Transformations
BIBREF47 provide a review of feature-wise transformation methods, of which the mechanisms presented in this paper form a part. In a few words, the INLINEFORM0 parameter, in both scalar gate and vector gate mechanisms, can be understood as a scaling parameter limited to the INLINEFORM1 range and conditioned on word representations, whereas adding the scaled INLINEFORM2 and INLINEFORM3 representations can be seen as biasing word representations conditioned on character representations.
The previous review extends the work by BIBREF48 , which describes the Feature-wise Linear Modulation (FiLM) framework as a generalization of Conditional Normalization methods, and applies it to visual reasoning tasks. Some of the reported findings are that, in general, scaling has a greater impact than biasing, and that in a setting similar to the scalar gate, limiting the scaling parameter to INLINEFORM0 hurt performance. Future decisions involving the design of mechanisms for combining character and word-level representations should be informed by these insights.
Conclusions
We presented an empirical study showing the effect that different ways of combining character and word representations have on word-level and sentence-level evaluation tasks.
We showed that a vector gate performed consistently better across a variety of word similarity and relatedness tasks. Additionally, despite showing inconsistent results in sentence evaluation tasks, it performed significantly better than the other methods in semantic similarity tasks.
We further showed, through this mechanism, that learning character-level representations is always beneficial, and becomes increasingly so for less common words.
In the future it would be interesting to study how the choice of mechanism for combining subword and word representations affects the more recent language-model-based pretraining methods such as ELMo BIBREF49 , GPT BIBREF50 , BIBREF51 and BERT BIBREF52 .
Acknowledgements
Thanks to Edison Marrese-Taylor and Pablo Loyola for their feedback on early versions of this manuscript. We also gratefully acknowledge the support of the NVIDIA Corporation with the donation of one of the GPUs used for this research. Jorge A. Balazs is partially supported by the Japanese Government MEXT Scholarship.
Hyperparameters
For each dataset, we only considered words that appeared at least twice; those that appeared only once were treated as UNK. We used the Treebank Word Tokenizer as implemented in NLTK for tokenizing the training and development datasets.
In the same fashion as conneau2017supervised, we used a batch size of 64, an SGD optimizer with an initial learning rate of INLINEFORM0 , and at each epoch divided the learning rate by 5 if the validation accuracy decreased. We also used gradient clipping when gradients were INLINEFORM1 .
We defined character vector representations as 50-dimensional vectors randomly initialized by sampling from the uniform distribution in the INLINEFORM0 range.
The output dimension of the character-level BiLSTM was 300 per direction, and remained of such size after combining forward and backward representations as depicted in eq. EQREF9 .
Word vector representations were initialized from the 300-dimensional GloVe vectors BIBREF14 , trained on 840B tokens from the Common Crawl, and finetuned during training. Words not present in the GloVe vocabulary were randomly initialized by sampling from the uniform distribution in the INLINEFORM0 range.
The input size of the word-level LSTM was 300 for every method except concat in which it was 600, and its output was always 2048 per direction, resulting in a 4096-dimensional sentence representation.
Sentence Evaluation Datasets
table:sentence-eval-datasets lists the sentence-level evaluation datasets used in this paper. The provided URLs correspond to the original sources, and not necessarily to the URLs where SentEval got the data from.
The versions of the CR, MPQA, MR, and SUBJ datasets used in this paper were the ones preprocessed by BIBREF75 . Both SST2 and SST5 correspond to preprocessed versions of the SST dataset by BIBREF74 . SST2 corresponds to a subset of SST used by BIBREF54 containing flat representations of sentences annotated with binary sentiment labels, and SST5 to another subset annotated with more fine-grained sentiment labels (very negative, negative, neutral, positive, very positive). | BiLSTM with max pooling
323e100a6c92d3fe503f7a93b96d821408f92109 | 323e100a6c92d3fe503f7a93b96d821408f92109_0 | Q: Which downstream sentence-level tasks do they evaluate on?
Text: Introduction
Incorporating sub-word structures like substrings, morphemes and characters into the creation of word representations significantly increases their quality, as reflected both by intrinsic metrics and by performance in a wide range of downstream tasks BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 .
The reason for this improvement is related to sub-word structures containing information that is usually ignored by standard word-level models. Indeed, when representing words as vectors extracted from a lookup table, semantically related words resulting from inflectional processes such as surf, surfing, and surfed, are treated as being independent from one another. Further, word-level embeddings do not account for derivational processes resulting in syntactically-similar words with different meanings such as break, breakable, and unbreakable. This causes derived words, which are usually less frequent, to have lower-quality (or no) vector representations.
Previous works have successfully combined character-level and word-level word representations, obtaining overall better results than using only word-level representations. For example BIBREF1 achieved state-of-the-art results in a machine translation task by representing unknown words as a composition of their characters. BIBREF4 created word representations by adding the vector representations of the words' surface forms and their morphemes ( INLINEFORM0 ), obtaining significant improvements on intrinsic evaluation tasks, word similarity and machine translation. BIBREF5 concatenated character-level and word-level representations for creating word representations, and then used them as input to their models for obtaining state-of-the-art results in Named Entity Recognition on several languages.
What these works have in common is that the models they describe first learn how to represent subword information, at character BIBREF1 , morpheme BIBREF4 , or substring BIBREF0 levels, and then combine these learned representations at the word level. The incorporation of information at a finer-grained hierarchy results in higher-quality modeling of rare words, morphological processes, and semantics BIBREF6 .
There is no consensus, however, on which combination method works better in which case, or how the choice of a combination method affects downstream performance, either measured intrinsically at the word level, or extrinsically at the sentence level.
In this paper we aim to provide some intuitions about how the choice of mechanism for combining character-level with word-level representations influences the quality of the final word representations, and the subsequent effect these have in the performance of downstream tasks. Our contributions are as follows:
Background
We are interested in studying different ways of combining word representations, obtained from different hierarchies, into a single word representation. Specifically, we want to study how combining word representations (1) taken directly from a word embedding lookup table, and (2) obtained from a function over the characters composing them, affects the quality of the final word representations.
Let INLINEFORM0 be a set, or vocabulary, of words with INLINEFORM1 elements, and INLINEFORM2 a vocabulary of characters with INLINEFORM3 elements. Further, let INLINEFORM4 be a sequence of words, and INLINEFORM5 be the sequence of characters composing INLINEFORM6 . Each token INLINEFORM7 can be represented as a vector INLINEFORM8 extracted directly from an embedding lookup table INLINEFORM9 , pre-trained or otherwise, and as a vector INLINEFORM10 built from the characters that compose it; in other words, INLINEFORM11 , where INLINEFORM12 is a function that maps a sequence of characters to a vector.
The methods for combining word and character-level representations we study, are of the form INLINEFORM0 where INLINEFORM1 is the final word representation.
Mapping Characters to Character-level Word Representations
The function INLINEFORM0 is composed of an embedding layer, an optional context function, and an aggregation function.
The embedding layer transforms each character INLINEFORM0 into a vector INLINEFORM1 of dimension INLINEFORM2 , by directly taking it from a trainable embedding lookup table INLINEFORM3 . We define the matrix representation of word INLINEFORM4 as INLINEFORM5 .
The context function takes INLINEFORM0 as input and returns a context-enriched matrix representation INLINEFORM1 , in which each INLINEFORM2 contains a measure of information about its context, and interactions with its neighbors. In particular, we chose to do this by feeding INLINEFORM3 to a BiLSTM BIBREF7 , BIBREF8 .
Informally, we can think of LSTM BIBREF10 as a function INLINEFORM0 that takes a matrix INLINEFORM1 as input and returns a context-enriched matrix representation INLINEFORM2 , where each INLINEFORM3 encodes information about the previous elements INLINEFORM4 .
A BiLSTM is simply composed of two LSTMs, one that reads the input from left to right (forward), and another that does so from right to left (backward). The outputs of the forward and backward LSTMs are INLINEFORM0 and INLINEFORM1 respectively. In the backward case the LSTM reads INLINEFORM2 first and INLINEFORM3 last; therefore INLINEFORM4 will encode the context from INLINEFORM5 .
The aggregation function takes the context-enriched matrix representation of word INLINEFORM0 for both directions, INLINEFORM1 and INLINEFORM2 , and returns a single vector INLINEFORM3 . To do so we followed BIBREF11 , and defined the character-level representation INLINEFORM4 of word INLINEFORM5 as the linear combination of the forward and backward last hidden states returned by the context function: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are trainable parameters, and INLINEFORM2 represents the concatenation operation between two vectors.
Combining Character and Word-level Representations
We tested three different methods for combining INLINEFORM0 with INLINEFORM1 : simple concatenation, a learned scalar gate BIBREF11 , and a learned vector gate (also referred to as feature-wise sigmoidal gate). Additionally, we compared these methods to two baselines: using pre-trained word vectors only, and using character-only features for representing words. See fig:methods for a visual description of the proposed methods.
word-only (w) considers only INLINEFORM0 and ignores INLINEFORM1 : DISPLAYFORM0
char-only (c) considers only INLINEFORM0 and ignores INLINEFORM1 : DISPLAYFORM0
concat (cat) concatenates both word and character-level representations: DISPLAYFORM0
scalar gate (sg) implements the scalar gating mechanism described by BIBREF11 : DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are trainable parameters, INLINEFORM2 , and INLINEFORM3 is the sigmoid function.
vector gate (vg): DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are trainable parameters, INLINEFORM2 , INLINEFORM3 is the element-wise sigmoid function, INLINEFORM4 is the element-wise product for vectors, and INLINEFORM5 is a vector of ones.
The vector gate is inspired by BIBREF11 and BIBREF12 , but is different to the former in that the gating mechanism acts upon each dimension of the word and character-level vectors, and different to the latter in that it does not rely on external sources of information for calculating the gating mechanism.
Finally, note that word only and char only are special cases of both gating mechanisms: INLINEFORM0 (scalar gate) and INLINEFORM1 (vector gate) correspond to word only; INLINEFORM2 and INLINEFORM3 correspond to char only.
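A tiny numeric check of this special-case claim, under the assumed convention that the gate g scales the character-level vector (the exact convention is hidden behind the INLINEFORM placeholders): fixing g to zeros recovers the word-only representation, and fixing it to ones recovers the char-only representation.

```python
import torch

def gated_combination(w, c, g):
    """Combine word vector w and character vector c with gate g in [0, 1]."""
    return (1.0 - g) * w + g * c

w = torch.tensor([1.0, 2.0, 3.0])   # toy word-level vector
c = torch.tensor([9.0, 8.0, 7.0])   # toy character-level vector

assert torch.equal(gated_combination(w, c, torch.zeros(3)), w)   # word only
assert torch.equal(gated_combination(w, c, torch.ones(3)), c)    # char only
# A scalar gate is the same check with g a single number broadcast over dimensions.
assert torch.equal(gated_combination(w, c, torch.tensor(0.0)), w)
```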
Obtaining Sentence Representations
To enable sentence-level classification we need to obtain a sentence representation from the word vectors INLINEFORM0 . We achieved this by using a BiLSTM with max pooling, which was shown to be a good universal sentence encoding mechanism BIBREF13 .
Let INLINEFORM0 be an input sentence and INLINEFORM1 its matrix representation, where each INLINEFORM2 was obtained by one of the methods described in subsec:methods. INLINEFORM3 is the context-enriched matrix representation of INLINEFORM4 obtained by feeding INLINEFORM5 to a BiLSTM of output dimension INLINEFORM6 . Lastly, INLINEFORM11 is the final sentence representation of INLINEFORM12 obtained by max-pooling INLINEFORM13 along the sequence dimension.
Finally, we initialized the word representations INLINEFORM0 using GloVe embeddings BIBREF14 , and fine-tuned them during training. Refer to app:hyperparams for details on the other hyperparameters we used.
Experimental Setup
We trained our models for solving the Natural Language Inference (NLI) task in two datasets, SNLI BIBREF15 and MultiNLI BIBREF16 , and validated them in each corresponding development set (including the matched and mismatched development sets of MultiNLI).
For each dataset-method combination we trained 7 models initialized with different random seeds, and saved each when it reached its best validation accuracy. We then evaluated the quality of each trained model's word representations INLINEFORM0 in 10 word similarity tasks, using the system created by BIBREF17 .
Finally, we fed these obtained word vectors to a BiLSTM with max-pooling and evaluated the final sentence representations in 11 downstream transfer tasks BIBREF13 , BIBREF18 .
Datasets
Word-level Semantic Similarity A desirable property of vector representations of words is that semantically similar words should have similar vector representations. Assessing whether a set of word representations possesses this quality is referred to as the semantic similarity task. This is the most widely used method for evaluating word representations, despite its shortcomings BIBREF20 .
This task consists of comparing the similarity between word vectors measured by a distance metric (usually cosine distance), with a similarity score obtained from human judgements. High correlation between these similarities is an indicator of good performance.
A problem with this formulation though, is that the definition of “similarity” often confounds the meaning of both similarity and relatedness. For example, cup and tea are related but dissimilar words, and this type of distinction is not always clear BIBREF21 , BIBREF22 .
To face the previous problem, we tested our methods in a wide variety of datasets, including some that explicitly model relatedness (WS353R), some that explicitly consider similarity (WS353S, SimLex999, SimVerb3500), and some where the distinction is not clear (MEN, MTurk287, MTurk771, RG, WS353). We also included the RareWords (RW) dataset for evaluating the quality of rare word representations. See appendix:datasets for a more complete description of the datasets we used.
Sentence-level Evaluation Tasks Unlike word-level representations, there is no consensus on the desirable properties sentence representations should have. In response to this, BIBREF13 created SentEval, a sentence representation evaluation benchmark designed for assessing how well sentence representations perform in various downstream tasks BIBREF23 .
Some of the datasets included in SentEval correspond to sentiment classification (CR, MPQA, MR, SST2, and SST5), subjectivity classification (SUBJ), question-type classification (TREC), recognizing textual entailment (SICK E), estimating semantic relatedness (SICK R), and measuring textual semantic similarity (STS16, STSB). The datasets are described by BIBREF13 , and we provide pointers to their original sources in the appendix table:sentence-eval-datasets.
To evaluate these sentence representations SentEval trained a linear model on top of them, and evaluated their performance in the validation sets accompanying each dataset. The only exception was the STS16 task, in which our representations were evaluated directly.
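The evaluation protocol therefore amounts to freezing the sentence encoder and fitting a linear classifier on top of its outputs. Below is a hedged scikit-learn sketch; SentEval's actual implementation differs in details such as the classifier, regularization search, and validation procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def linear_probe(encode_fn, train_sents, train_labels, dev_sents, dev_labels):
    """Train a linear model on frozen sentence representations.

    `encode_fn` maps a single sentence to a d-dimensional vector; here it
    stands in for the frozen BiLSTM-max sentence encoder.
    """
    X_train = np.stack([encode_fn(s) for s in train_sents])
    X_dev = np.stack([encode_fn(s) for s in dev_sents])
    clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    return clf.score(X_dev, dev_labels)   # accuracy on the validation set
```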
Word Similarity
table:wordlevelresults shows the quality of word representations in terms of the correlation between word similarity scores obtained by the proposed models and word similarity scores defined by humans.
First, we can see that for each task, character only models had significantly worse performance than every other model trained on the same dataset. The most likely explanation for this is that these models are the only ones that need to learn word representations from scratch, since they have no access to the global semantic knowledge encoded by the GloVe embeddings.
Further, bold results show the overall trend that vector gates outperformed the other methods regardless of training dataset. This implies that learning how to combine character and word-level representations at the dimension level produces word vector representations that capture a notion of word similarity and relatedness that is closer to that of humans.
Additionally, results from the MNLI row in general, and underlined results in particular, show that training on MultiNLI produces word representations better at capturing word similarity. This is probably due to MultiNLI data being richer than that of SNLI. Indeed, MultiNLI data was gathered from various sources (novels, reports, letters, and telephone conversations, among others), rather than the single image captions dataset from which SNLI was created.
Exceptions to the previous rule are models evaluated in MEN and RW. The former case can be explained by the MEN dataset containing only words that appear as image labels in the ESP-Game and MIRFLICKR-1M image datasets BIBREF24 , and therefore having data that is more closely distributed to SNLI than to MultiNLI.
More notably, in the RareWords dataset BIBREF25 , the word only, concat, and scalar gate methods performed equally, despite having been trained in different datasets ( INLINEFORM0 ), and the char only method performed significantly worse when trained in MultiNLI. The vector gate, however, performed significantly better than its counterpart trained in SNLI. These facts provide evidence that this method is capable of capturing linguistic phenomena that the other methods are unable to model.
table:word-similarity-dataset lists the word-similarity datasets and their corresponding reference. As mentioned in subsec:datasets, all the word-similarity datasets contain pairs of words annotated with similarity or relatedness scores, although this difference is not always explicit. Below we provide some details for each.
MEN contains 3000 annotated word pairs with integer scores ranging from 0 to 50. Words correspond to image labels appearing in the ESP-Game and MIRFLICKR-1M image datasets.
MTurk287 contains 287 annotated pairs with scores ranging from 1.0 to 5.0. It was created from words appearing in both DBpedia and in news articles from The New York Times.
MTurk771 contains 771 annotated pairs with scores ranging from 1.0 to 5.0, with words having synonymy, holonymy or meronymy relationships sampled from WordNet BIBREF56 .
RG contains 65 annotated pairs with scores ranging from 0.0 to 4.0 representing “similarity of meaning”.
RW contains 2034 pairs of words annotated with similarity scores in a scale from 0 to 10. The words included in this dataset were obtained from Wikipedia based on their frequency, and later filtered depending on their WordNet synsets, including synonymy, hyperonymy, hyponymy, holonymy and meronymy. This dataset was created with the purpose of testing how well models can represent rare and complex words.
SimLex999 contains 999 word pairs annotated with similarity scores ranging from 0 to 10. In this case the authors explicitly considered similarity and not relatedness, addressing the shortcomings of datasets that do not, such as MEN and WS353. Words include nouns, adjectives and verbs.
SimVerb3500 contains 3500 verb pairs annotated with similarity scores ranging from 0 to 10. Verbs were obtained from the USF free association database BIBREF66 , and VerbNet BIBREF63 . This dataset was created to address the lack of representativity of verbs in SimLex999, and the fact that, at the time of creation, the best performing models had already surpassed inter-annotator agreement in verb similarity evaluation resources. Like SimLex999, this dataset also explicitly considers similarity as opposed to relatedness.
WS353 contains 353 word pairs annotated with similarity scores from 0 to 10.
WS353R is a subset of WS353 containing 252 word pairs annotated with relatedness scores. This dataset was created by asking humans to classify each WS353 word pair into one of the following classes: synonyms, antonyms, identical, hyperonym-hyponym, hyponym-hyperonym, holonym-meronym, meronym-holonym, and none-of-the-above. These annotations were later used to group the pairs into: similar pairs (synonyms, antonyms, identical, hyperonym-hyponym, and hyponym-hyperonym), related pairs (holonym-meronym, meronym-holonym, and none-of-the-above with a human similarity score greater than 5), and unrelated pairs (classified as none-of-the-above with a similarity score less than or equal to 5). This dataset is composed of the union of related and unrelated pairs.
WS353S is another subset of WS353 containing 203 word pairs annotated with similarity scores. This dataset is composed of the union of similar and unrelated pairs, as described previously.
Word Frequencies and Gating Values
fig:gatingviz shows that for more common words the vector gate mechanism tends to favor only a few dimensions while keeping a low average gating value across dimensions. On the other hand, values are greater and more homogeneous across dimensions in rarer words. Further, fig:freqvsgatevalue shows this mechanism assigns, on average, a greater gating value to less frequent words, confirming the findings by BIBREF11 , and BIBREF12 .
In other words, the less frequent the word, the more this mechanism allows the character-level representation to influence the final word representation, as shown by eq:vg. A possible interpretation of this result is that exploiting character information becomes increasingly necessary as the quality of word-level representations decreases.
Another observable trend in both figures is that gating values tend to be low on average. Indeed, it is possible to see in fig:freqvsgatevalue that the average gating values range from INLINEFORM0 to INLINEFORM1 . This result corroborates the findings by BIBREF11 , which state that setting INLINEFORM2 in eq:scalar-gate was better than setting it to higher values.
In summary, the gating mechanisms learn how to compensate for the lack of expressivity of underrepresented words by selectively combining their representations with those of characters.
Sentence-level Evaluation
table:sentlevelresults shows the impact that different methods for combining character and word-level word representations have in the quality of the sentence representations produced by our models.
We can observe the same trend mentioned in subsec:word-similarity-eval, and highlighted by the difference between bold values, that models trained in MultiNLI performed better than those trained in SNLI at a statistically significant level, confirming the findings of BIBREF13 . In other words, training sentence encoders on MultiNLI yields more general sentence representations than doing so on SNLI.
The two exceptions to the previous trend, SICKE and SICKR, benefited more from models trained on SNLI. We hypothesize this is again due to both SNLI and SICK BIBREF26 having similar data distributions.
Additionally, there was no method that significantly outperformed the word only baseline in classification tasks. This means that the added expressivity offered by explicitly modeling characters, be it through concatenation or gating, was not significantly better than simply fine-tuning the pre-trained GloVe embeddings for this type of task. We hypothesize this is due to the conflation of two effects. First, the fact that morphological processes might not encode important information for solving these tasks; and second, that SNLI and MultiNLI belong to domains that are too dissimilar to the domains in which the sentence representations are being tested.
On the other hand, the vector gate significantly outperformed every other method in the STSB task when trained in both datasets, and in the STS16 task when trained in SNLI. This again hints at this method being capable of modeling phenomena at the word level, resulting in improved semantic representations at the sentence level.
Relationship Between Word- and Sentence-level Evaluation Tasks
It is clear that the better performance the vector gate had in word similarity tasks did not translate into overall better performance in downstream tasks. This confirms previous findings indicating that intrinsic word evaluation metrics are not good predictors of downstream performance BIBREF29 , BIBREF30 , BIBREF20 , BIBREF31 .
subfig:mnli-correlations shows that the word representations created by the vector gate trained in MultiNLI had positively-correlated results within several word-similarity tasks. This hints at the generality of the word representations created by this method when modeling similarity and relatedness.
However, the same cannot be said about sentence-level evaluation performance; there is no clear correlation between word similarity tasks and sentence-evaluation tasks. This is clearly illustrated by performance in the STSBenchmark, the only task in which the vector gate was significantly superior, which is not correlated with performance in any word-similarity dataset. This can be interpreted simply as word-level representations capturing word-similarity not being a sufficient condition for good performance in sentence-level tasks.
In general, fig:correlations shows that there are no general correlation effects spanning both training datasets and combination mechanisms. For example, subfig:snli-correlations shows that, for both word-only and concat models trained in SNLI, performance in word similarity tasks correlates positively with performance in most sentence evaluation tasks, however, this does not happen as clearly for the same models trained in MultiNLI (subfig:mnli-correlations).
Gating Mechanisms for Combining Characters and Word Representations
To the best of our knowledge, there are only two recent works that specifically study how to combine word and subword-level vector representations.
BIBREF11 propose to use a trainable scalar gating mechanism capable of learning a weighting scheme for combining character-level and word-level representations. They compared their proposed method to manually weighting both levels; using characters only; words only; or their concatenation. They found that in some datasets a specific manual weighting scheme performed better, while in others the learned scalar gate did.
BIBREF12 further expand the gating concept by making the mechanism work at a finer-grained level, learning how to weight each vector's dimensions independently, conditioned on external word-level features such as part-of-speech and named-entity tags. Similarly, they compared their proposed mechanism to using words only, characters only, and a concatenation of both, with and without external features. They found that their vector gate performed better than the other methods in all the reported tasks, and beat the state of the art in two reading comprehension tasks.
Both works showed that the gating mechanisms assigned greater importance to character-level representations in rare words, and to word-level representations in common ones, reaffirming the previous findings that subword structures in general, and characters in particular, are beneficial for modeling uncommon words.
Sentence Representation Learning
The problem of representing sentences as fixed-length vectors has been widely studied.
BIBREF32 suggested a self-adaptive hierarchical model that gradually composes words into intermediate phrase representations, and adaptively selects specific hierarchical levels for specific tasks. BIBREF33 proposed an encoder-decoder model trained by attempting to reconstruct the surrounding sentences of an encoded passage, in a fashion similar to Skip-gram BIBREF34 . BIBREF35 overcame the previous model's need for ordered training sentences by using autoencoders for creating the sentence representations. BIBREF36 implemented a model simpler and faster to train than the previous two, while having competitive performance. Similar to BIBREF33 , BIBREF37 suggested predicting future sentences with a hierarchical CNN-LSTM encoder.
BIBREF13 trained several sentence encoding architectures on a combination of the SNLI and MultiNLI datasets, and showed that a BiLSTM with max-pooling was the best at producing highly transferable sentence representations. More recently, BIBREF18 empirically showed that sentence representations created in a multi-task setting BIBREF38 , performed increasingly better the more tasks they were trained in. BIBREF39 proposed using an autoencoder that relies on multi-head self-attention over the concatenation of the max and mean pooled encoder outputs for producing sentence representations. Finally, BIBREF40 show that modern sentence embedding methods are not vastly superior to random methods.
The works mentioned so far usually evaluate the quality of the produced sentence representations in sentence-level downstream tasks. Common benchmarks grouping these kinds of tasks include SentEval BIBREF23 and GLUE BIBREF41 . Another trend, however, is to probe sentence representations to understand what linguistic phenomena they encode BIBREF42 , BIBREF43 , BIBREF44 , BIBREF45 , BIBREF46 .
General Feature-wise Transformations
BIBREF47 provide a review of feature-wise transformation methods, of which the mechanisms presented in this paper form a part. In a few words, the INLINEFORM0 parameter, in both scalar gate and vector gate mechanisms, can be understood as a scaling parameter limited to the INLINEFORM1 range and conditioned on word representations, whereas adding the scaled INLINEFORM2 and INLINEFORM3 representations can be seen as biasing word representations conditioned on character representations.
The previous review extends the work by BIBREF48 , which describes the Feature-wise Linear Modulation (FiLM) framework as a generalization of Conditional Normalization methods, and applies it in visual reasoning tasks. Some of the reported findings are that, in general, scaling has a greater impact than biasing, and that in a setting similar to the scalar gate, limiting the scaling parameter to INLINEFORM0 hurt performance. Future decisions involving the design of mechanisms for combining character and word-level representations should be informed by these insights.
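To make the scaling-and-biasing reading concrete, the following is a minimal sketch of a FiLM-style feature-wise transformation. It is an illustration only: PyTorch is assumed as the framework, and the class and variable names are ours, not taken from the original FiLM implementation.

# Minimal FiLM-style sketch: scale and shift a feature vector conditioned on a vector z.
import torch
import torch.nn as nn

class FeatureWiseTransform(nn.Module):
    def __init__(self, cond_dim: int, feat_dim: int):
        super().__init__()
        self.to_gamma = nn.Linear(cond_dim, feat_dim)  # scaling parameters (unconstrained)
        self.to_beta = nn.Linear(cond_dim, feat_dim)   # biasing parameters

    def forward(self, features: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        gamma = self.to_gamma(z)   # unlike the gates studied here, gamma is not squashed to [0, 1]
        beta = self.to_beta(z)
        return gamma * features + beta

film = FeatureWiseTransform(cond_dim=300, feat_dim=300)
modulated = film(torch.randn(8, 300), torch.randn(8, 300))

In the gating mechanisms studied in this paper, the analogous scaling parameter is additionally squashed by a sigmoid, which is exactly the kind of restriction the findings above caution about.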
Conclusions
We presented an empirical study showing the effect that different ways of combining character and word representations have on word-level and sentence-level evaluation tasks.
We showed that a vector gate performed consistently better across a variety of word similarity and relatedness tasks. Additionally, despite showing inconsistent results in sentence evaluation tasks, it performed significantly better than the other methods in semantic similarity tasks.
We further showed, through this mechanism, that learning character-level representations is always beneficial, and becomes increasingly so with less common words.
In the future it would be interesting to study how the choice of mechanism for combining subword and word representations affects the more recent language-model-based pretraining methods such as ELMo BIBREF49 , GPT BIBREF50 , BIBREF51 and BERT BIBREF52 .
Acknowledgements
Thanks to Edison Marrese-Taylor and Pablo Loyola for their feedback on early versions of this manuscript. We also gratefully acknowledge the support of the NVIDIA Corporation with the donation of one of the GPUs used for this research. Jorge A. Balazs is partially supported by the Japanese Government MEXT Scholarship.
Hyperparameters
For each dataset, we only considered words that appeared at least twice; those that appeared only once were treated as UNK. We used the Treebank Word Tokenizer, as implemented in NLTK, for tokenizing the training and development datasets.
In the same fashion as conneau2017supervised, we used a batch size of 64, an SGD optimizer with an initial learning rate of INLINEFORM0 , and at each epoch divided the learning rate by 5 if the validation accuracy decreased. We also used gradient clipping when gradients were INLINEFORM1 .
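As a sketch only, the learning-rate schedule and gradient clipping described above could be implemented as follows; PyTorch is assumed, and the initial learning rate, clipping threshold, and model are placeholders for the values hidden behind the INLINEFORM markers.

import torch

model = torch.nn.Linear(300, 3)                          # placeholder for the real model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # 0.1 is a placeholder value

best_val_acc = float("-inf")

def on_epoch_end(val_acc: float) -> None:
    # Divide the learning rate by 5 whenever validation accuracy decreases.
    global best_val_acc
    if val_acc < best_val_acc:
        for group in optimizer.param_groups:
            group["lr"] /= 5.0
    best_val_acc = max(best_val_acc, val_acc)

# Inside the training loop, clip gradients before optimizer.step():
# torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)  # threshold is a placeholder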
We defined character vector representations as 50-dimensional vectors randomly initialized by sampling from the uniform distribution in the INLINEFORM0 range.
The output dimension of the character-level BiLSTM was 300 per direction, and the representation remained 300-dimensional after combining the forward and backward representations as depicted in eq. EQREF9 .
Word vector representations were initialized from the 300-dimensional GloVe vectors BIBREF14 , trained on 840B tokens from the Common Crawl, and fine-tuned during training. Words not present in the GloVe vocabulary were randomly initialized by sampling from the uniform distribution in the INLINEFORM0 range.
The input size of the word-level LSTM was 300 for every method except concat in which it was 600, and its output was always 2048 per direction, resulting in a 4096-dimensional sentence representation.
Sentence Evaluation Datasets
table:sentence-eval-datasets lists the sentence-level evaluation datasets used in this paper. The provided URLs correspond to the original sources, and not necessarily to the URLs where SentEval got the data from.
The versions of the CR, MPQA, MR, and SUBJ datasets used in this paper were the ones preprocessed by BIBREF75 . Both SST2 and SST5 correspond to preprocessed versions of the SST dataset by BIBREF74 . SST2 corresponds to a subset of SST used by BIBREF54 containing flat representations of sentences annotated with binary sentiment labels, and SST5 to another subset annotated with more fine-grained sentiment labels (very negative, negative, neutral, positive, very positive). | BIBREF13 , BIBREF18 |
9f89bff89cea722debc991363f0826de945bc582 | 9f89bff89cea722debc991363f0826de945bc582_0 | Q: Which similarity datasets do they use?
Text: Introduction
Incorporating sub-word structures like substrings, morphemes and characters into the creation of word representations significantly increases their quality as reflected both by intrinsic metrics and by performance in a wide range of downstream tasks BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 .
The reason for this improvement is related to sub-word structures containing information that is usually ignored by standard word-level models. Indeed, when representing words as vectors extracted from a lookup table, semantically related words resulting from inflectional processes such as surf, surfing, and surfed, are treated as being independent from one another. Further, word-level embeddings do not account for derivational processes resulting in syntactically-similar words with different meanings such as break, breakable, and unbreakable. This causes derived words, which are usually less frequent, to have lower-quality (or no) vector representations.
Previous works have successfully combined character-level and word-level word representations, obtaining overall better results than using only word-level representations. For example BIBREF1 achieved state-of-the-art results in a machine translation task by representing unknown words as a composition of their characters. BIBREF4 created word representations by adding the vector representations of the words' surface forms and their morphemes ( INLINEFORM0 ), obtaining significant improvements on intrinsic evaluation tasks, word similarity and machine translation. BIBREF5 concatenated character-level and word-level representations for creating word representations, and then used them as input to their models for obtaining state-of-the-art results in Named Entity Recognition on several languages.
What these works have in common is that the models they describe first learn how to represent subword information, at character BIBREF1 , morpheme BIBREF4 , or substring BIBREF0 levels, and then combine these learned representations at the word level. The incorporation of information at a finer-grained hierarchy results in higher-quality modeling of rare words, morphological processes, and semantics BIBREF6 .
There is no consensus, however, on which combination method works better in which case, or how the choice of a combination method affects downstream performance, either measured intrinsically at the word level, or extrinsically at the sentence level.
In this paper we aim to provide some intuitions about how the choice of mechanism for combining character-level with word-level representations influences the quality of the final word representations, and the subsequent effect these have in the performance of downstream tasks. Our contributions are as follows:
Background
We are interested in studying different ways of combining word representations, obtained from different hierarchies, into a single word representation. Specifically, we want to study how combining word representations (1) taken directly from a word embedding lookup table, and (2) obtained from a function over the characters composing them, affects the quality of the final word representations.
Let INLINEFORM0 be a set, or vocabulary, of words with INLINEFORM1 elements, and INLINEFORM2 a vocabulary of characters with INLINEFORM3 elements. Further, let INLINEFORM4 be a sequence of words, and INLINEFORM5 be the sequence of characters composing INLINEFORM6 . Each token INLINEFORM7 can be represented as a vector INLINEFORM8 extracted directly from an embedding lookup table INLINEFORM9 , pre-trained or otherwise, and as a vector INLINEFORM10 built from the characters that compose it; in other words, INLINEFORM11 , where INLINEFORM12 is a function that maps a sequence of characters to a vector.
The methods for combining word and character-level representations we study, are of the form INLINEFORM0 where INLINEFORM1 is the final word representation.
Mapping Characters to Character-level Word Representations
The function INLINEFORM0 is composed of an embedding layer, an optional context function, and an aggregation function.
The embedding layer transforms each character INLINEFORM0 into a vector INLINEFORM1 of dimension INLINEFORM2 , by directly taking it from a trainable embedding lookup table INLINEFORM3 . We define the matrix representation of word INLINEFORM4 as INLINEFORM5 .
The context function takes INLINEFORM0 as input and returns a context-enriched matrix representation INLINEFORM1 , in which each INLINEFORM2 contains a measure of information about its context, and interactions with its neighbors. In particular, we chose to do this by feeding INLINEFORM3 to a BiLSTM BIBREF7 , BIBREF8 .
Informally, we can think of LSTM BIBREF10 as a function INLINEFORM0 that takes a matrix INLINEFORM1 as input and returns a context-enriched matrix representation INLINEFORM2 , where each INLINEFORM3 encodes information about the previous elements INLINEFORM4 .
A BiLSTM is simply composed of two LSTMs, one that reads the input from left to right (forward), and another that does so from right to left (backward). The outputs of the forward and backward LSTMs are INLINEFORM0 and INLINEFORM1 respectively. In the backward case the LSTM reads INLINEFORM2 first and INLINEFORM3 last; therefore, INLINEFORM4 will encode the context from INLINEFORM5 .
The aggregation function takes the context-enriched matrix representation of word INLINEFORM0 for both directions, INLINEFORM1 and INLINEFORM2 , and returns a single vector INLINEFORM3 . To do so we followed BIBREF11 , and defined the character-level representation INLINEFORM4 of word INLINEFORM5 as the linear combination of the forward and backward last hidden states returned by the context function: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are trainable parameters, and INLINEFORM2 represents the concatenation operation between two vectors.
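Putting the three components together, a minimal sketch of the character-level word encoder could look as follows. PyTorch is assumed, the dimensions follow the hyperparameter appendix, and all module and variable names are ours.

import torch
import torch.nn as nn

class CharWordEncoder(nn.Module):
    def __init__(self, n_chars: int, char_dim: int = 50, hidden_dim: int = 300):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)              # embedding layer
        self.bilstm = nn.LSTM(char_dim, hidden_dim,
                              bidirectional=True, batch_first=True)  # context function
        self.proj = nn.Linear(2 * hidden_dim, hidden_dim)            # aggregation function

    def forward(self, char_ids: torch.Tensor) -> torch.Tensor:
        # char_ids: (batch_of_words, max_word_length)
        embedded = self.char_emb(char_ids)
        _, (h_n, _) = self.bilstm(embedded)                          # h_n: (2, batch, hidden_dim)
        last_states = torch.cat([h_n[0], h_n[1]], dim=-1)            # [forward; backward] last states
        return self.proj(last_states)                                # (batch, hidden_dim)

encoder = CharWordEncoder(n_chars=100)
char_level_word_vecs = encoder(torch.randint(0, 100, (4, 12)))       # 4 words, 12 characters each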
Combining Character and Word-level Representations
We tested three different methods for combining INLINEFORM0 with INLINEFORM1 : simple concatenation, a learned scalar gate BIBREF11 , and a learned vector gate (also referred to as feature-wise sigmoidal gate). Additionally, we compared these methods to two baselines: using pre-trained word vectors only, and using character-only features for representing words. See fig:methods for a visual description of the proposed methods.
word-only (w) considers only INLINEFORM0 and ignores INLINEFORM1 : DISPLAYFORM0
char-only (c) considers only INLINEFORM0 and ignores INLINEFORM1 : DISPLAYFORM0
concat (cat) concatenates both word and character-level representations: DISPLAYFORM0
scalar gate (sg) implements the scalar gating mechanism described by BIBREF11 : DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are trainable parameters, INLINEFORM2 , and INLINEFORM3 is the sigmoid function.
vector gate (vg): DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are trainable parameters, INLINEFORM2 , INLINEFORM3 is the element-wise sigmoid function, INLINEFORM4 is the element-wise product for vectors, and INLINEFORM5 is a vector of ones.
The vector gate is inspired by BIBREF11 and BIBREF12 , but is different to the former in that the gating mechanism acts upon each dimension of the word and character-level vectors, and different to the latter in that it does not rely on external sources of information for calculating the gating mechanism.
Finally, note that word only and char only are special cases of both gating mechanisms: INLINEFORM0 (scalar gate) and INLINEFORM1 (vector gate) correspond to word only; INLINEFORM2 and INLINEFORM3 correspond to char only.
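The following sketch restates the three combination methods in code. PyTorch is assumed, parameter names are ours, and, following BIBREF11 , the gating values here are computed from the word-level vector (this is also assumed for the vector gate, which does not use external features).

import torch
import torch.nn as nn

def concat(w: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
    return torch.cat([w, c], dim=-1)                 # concat method

class ScalarGate(nn.Module):
    # One gating scalar per word: g = sigmoid(v . w + b); output = (1 - g) w + g c.
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(dim, 1)

    def forward(self, w: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(w))              # (batch, 1), broadcast over dimensions
        return (1.0 - g) * w + g * c

class VectorGate(nn.Module):
    # One gating value per dimension: g = sigmoid(W w + b), applied element-wise.
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(dim, dim)

    def forward(self, w: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(w))              # (batch, dim)
        return (1.0 - g) * w + g * c

# A gate of 0 recovers word only and a gate of 1 recovers char only, matching the special cases above.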
Obtaining Sentence Representations
To enable sentence-level classification we need to obtain a sentence representation from the word vectors INLINEFORM0 . We achieved this by using a BiLSTM with max pooling, which was shown to be a good universal sentence encoding mechanism BIBREF13 .
Let INLINEFORM0 be an input sentence and INLINEFORM1 its matrix representation, where each INLINEFORM2 was obtained by one of the methods described in subsec:methods. INLINEFORM3 is the context-enriched matrix representation of INLINEFORM4 obtained by feeding INLINEFORM5 to a BiLSTM of output dimension INLINEFORM6 . Lastly, INLINEFORM11 is the final sentence representation of INLINEFORM12 obtained by max-pooling INLINEFORM13 along the sequence dimension.
Finally, we initialized the word representations INLINEFORM0 using GloVe embeddings BIBREF14 , and fine-tuned them during training. Refer to app:hyperparams for details on the other hyperparameters we used.
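A minimal sketch of this sentence encoder follows; PyTorch is assumed, the 2048-units-per-direction hidden size comes from the hyperparameter appendix, and the plain max over time shown here ignores the padding/masking a real implementation would need.

import torch
import torch.nn as nn

class MaxPoolBiLSTMEncoder(nn.Module):
    def __init__(self, input_dim: int = 300, hidden_dim: int = 2048):
        super().__init__()
        self.bilstm = nn.LSTM(input_dim, hidden_dim, bidirectional=True, batch_first=True)

    def forward(self, word_vecs: torch.Tensor) -> torch.Tensor:
        # word_vecs: (batch, seq_len, input_dim), one combined word vector per token
        outputs, _ = self.bilstm(word_vecs)          # (batch, seq_len, 2 * hidden_dim)
        sentence_repr, _ = outputs.max(dim=1)        # max-pool over the sequence dimension
        return sentence_repr                         # (batch, 4096)

encoder = MaxPoolBiLSTMEncoder()
sentence_vecs = encoder(torch.randn(2, 15, 300))     # two 15-token sentences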
Experimental Setup
We trained our models for solving the Natural Language Inference (NLI) task in two datasets, SNLI BIBREF15 and MultiNLI BIBREF16 , and validated them in each corresponding development set (including the matched and mismatched development sets of MultiNLI).
For each dataset-method combination we trained 7 models initialized with different random seeds, and saved each when it reached its best validation accuracy. We then evaluated the quality of each trained model's word representations INLINEFORM0 in 10 word similarity tasks, using the system created by BIBREF17 .
Finally, we fed these obtained word vectors to a BiLSTM with max-pooling and evaluated the final sentence representations in 11 downstream transfer tasks BIBREF13 , BIBREF18 .
Datasets
Word-level Semantic Similarity A desirable property of vector representations of words is that semantically similar words should have similar vector representations. Assessing whether a set of word representations possesses this quality is referred to as the semantic similarity task. This is the most widely used method for evaluating word representations, despite its shortcomings BIBREF20 .
This task consists of comparing the similarity between word vectors measured by a distance metric (usually cosine distance), with a similarity score obtained from human judgements. High correlation between these similarities is an indicator of good performance.
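As an illustration of this protocol (not part of the paper's own evaluation code, which relies on the system by BIBREF17 ), the comparison can be sketched as follows with cosine similarity and Spearman correlation, the statistic commonly reported for this task; the toy word pairs and embeddings are placeholders.

import numpy as np
from scipy.stats import spearmanr

def evaluate_similarity(embeddings: dict, word_pairs: list) -> float:
    # word_pairs: list of (word1, word2, human_score) triples from a similarity dataset.
    model_scores, human_scores = [], []
    for w1, w2, human in word_pairs:
        v1, v2 = embeddings[w1], embeddings[w2]
        cosine = float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
        model_scores.append(cosine)
        human_scores.append(human)
    return spearmanr(model_scores, human_scores).correlation

embeddings = {"cup": np.random.rand(300), "tea": np.random.rand(300), "mug": np.random.rand(300)}
rho = evaluate_similarity(embeddings, [("cup", "tea", 6.5), ("cup", "mug", 9.0), ("tea", "mug", 5.0)])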
A problem with this formulation, though, is that the definition of “similarity” often conflates similarity and relatedness. For example, cup and tea are related but dissimilar words, and this type of distinction is not always clear BIBREF21 , BIBREF22 .
To face the previous problem, we tested our methods in a wide variety of datasets, including some that explicitly model relatedness (WS353R), some that explicitly consider similarity (WS353S, SimLex999, SimVerb3500), and some where the distinction is not clear (MEN, MTurk287, MTurk771, RG, WS353). We also included the RareWords (RW) dataset for evaluating the quality of rare word representations. See appendix:datasets for a more complete description of the datasets we used.
Sentence-level Evaluation Tasks Unlike word-level representations, there is no consensus on the desirable properties sentence representations should have. In response to this, BIBREF13 created SentEval, a sentence representation evaluation benchmark designed for assessing how well sentence representations perform in various downstream tasks BIBREF23 .
Some of the datasets included in SentEval correspond to sentiment classification (CR, MPQA, MR, SST2, and SST5), subjectivity classification (SUBJ), question-type classification (TREC), recognizing textual entailment (SICK E), estimating semantic relatedness (SICK R), and measuring textual semantic similarity (STS16, STSB). The datasets are described by BIBREF13 , and we provide pointers to their original sources in the appendix table:sentence-eval-datasets.
To evaluate these sentence representations SentEval trained a linear model on top of them, and evaluated their performance in the validation sets accompanying each dataset. The only exception was the STS16 task, in which our representations were evaluated directly.
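As a sketch of this protocol, a linear probe over frozen sentence vectors can be as simple as the following; scikit-learn is used here purely for illustration, and SentEval's actual classifier and hyperparameters may differ.

import numpy as np
from sklearn.linear_model import LogisticRegression

train_reprs = np.random.randn(100, 4096)           # frozen sentence representations (placeholders)
train_labels = np.random.randint(0, 2, size=100)   # e.g. binary sentiment labels
val_reprs = np.random.randn(20, 4096)
val_labels = np.random.randint(0, 2, size=20)

probe = LogisticRegression(max_iter=1000).fit(train_reprs, train_labels)
val_accuracy = probe.score(val_reprs, val_labels)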
Word Similarity
table:wordlevelresults shows the quality of word representations in terms of the correlation between word similarity scores obtained by the proposed models and word similarity scores defined by humans.
First, we can see that for each task, character only models had significantly worse performance than every other model trained on the same dataset. The most likely explanation for this is that these models are the only ones that need to learn word representations from scratch, since they have no access to the global semantic knowledge encoded by the GloVe embeddings.
Further, bold results show the overall trend that vector gates outperformed the other methods regardless of training dataset. This implies that learning how to combine character and word-level representations at the dimension level produces word vector representations that capture a notion of word similarity and relatedness that is closer to that of humans.
Additionally, results from the MNLI row in general, and underlined results in particular, show that training on MultiNLI produces word representations better at capturing word similarity. This is probably due to MultiNLI data being richer than that of SNLI. Indeed, MultiNLI data was gathered from various sources (novels, reports, letters, and telephone conversations, among others), rather than the single image captions dataset from which SNLI was created.
Exceptions to the previous rule are models evaluated in MEN and RW. The former case can be explained by the MEN dataset containing only words that appear as image labels in the ESP-Game and MIRFLICKR-1M image datasets BIBREF24 , and therefore having data that is more closely distributed to SNLI than to MultiNLI.
More notably, in the RareWords dataset BIBREF25 , the word only, concat, and scalar gate methods performed equally, despite having been trained in different datasets ( INLINEFORM0 ), and the char only method performed significantly worse when trained in MultiNLI. The vector gate, however, performed significantly better than its counterpart trained in SNLI. These facts provide evidence that this method is capable of capturing linguistic phenomena that the other methods are unable to model.
table:word-similarity-dataset lists the word-similarity datasets and their corresponding reference. As mentioned in subsec:datasets, all the word-similarity datasets contain pairs of words annotated with similarity or relatedness scores, although this difference is not always explicit. Below we provide some details for each.
MEN contains 3000 annotated word pairs with integer scores ranging from 0 to 50. Words correspond to image labels appearing in the ESP-Game and MIRFLICKR-1M image datasets.
MTurk287 contains 287 annotated pairs with scores ranging from 1.0 to 5.0. It was created from words appearing in both DBpedia and in news articles from The New York Times.
MTurk771 contains 771 annotated pairs with scores ranging from 1.0 to 5.0, with words having synonymy, holonymy or meronymy relationships sampled from WordNet BIBREF56 .
RG contains 65 annotated pairs with scores ranging from 0.0 to 4.0 representing “similarity of meaning”.
RW contains 2034 pairs of words annotated with similarity scores in a scale from 0 to 10. The words included in this dataset were obtained from Wikipedia based on their frequency, and later filtered depending on their WordNet synsets, including synonymy, hyperonymy, hyponymy, holonymy and meronymy. This dataset was created with the purpose of testing how well models can represent rare and complex words.
SimLex999 contains 999 word pairs annotated with similarity scores ranging from 0 to 10. In this case the authors explicitly considered similarity and not relatedness, addressing the shortcomings of datasets that do not, such as MEN and WS353. Words include nouns, adjectives and verbs.
SimVerb3500 contains 3500 verb pairs annotated with similarity scores ranging from 0 to 10. Verbs were obtained from the USF free association database BIBREF66 , and VerbNet BIBREF63 . This dataset was created to address the lack of representativity of verbs in SimLex999, and the fact that, at the time of creation, the best performing models had already surpassed inter-annotator agreement in verb similarity evaluation resources. Like SimLex999, this dataset also explicitly considers similarity as opposed to relatedness.
WS353 contains 353 word pairs annotated with similarity scores from 0 to 10.
WS353R is a subset of WS353 containing 252 word pairs annotated with relatedness scores. This dataset was created by asking humans to classify each WS353 word pair into one of the following classes: synonyms, antonyms, identical, hyperonym-hyponym, hyponym-hyperonym, holonym-meronym, meronym-holonym, and none-of-the-above. These annotations were later used to group the pairs into: similar pairs (synonyms, antonyms, identical, hyperonym-hyponym, and hyponym-hyperonym), related pairs (holonym-meronym, meronym-holonym, and none-of-the-above with a human similarity score greater than 5), and unrelated pairs (classified as none-of-the-above with a similarity score less than or equal to 5). This dataset is composed by the union of related and unrelated pairs.
WS353S is another subset of WS353 containing 203 word pairs annotated with similarity scores. This dataset is composed by the union of similar and unrelated pairs, as described previously.
Word Frequencies and Gating Values
fig:gatingviz shows that for more common words the vector gate mechanism tends to favor only a few dimensions while keeping a low average gating value across dimensions. On the other hand, values are greater and more homogeneous across dimensions in rarer words. Further, fig:freqvsgatevalue shows this mechanism assigns, on average, a greater gating value to less frequent words, confirming the findings by BIBREF11 , and BIBREF12 .
In other words, the less frequent the word, the more this mechanism allows the character-level representation to influence the final word representation, as shown by eq:vg. A possible interpretation of this result is that exploiting character information becomes increasingly necessary as word-level representations' quality decreases.
Another observable trend in both figures is that gating values tend to be low on average. Indeed, it is possible to see in fig:freqvsgatevalue that the average gating values range from INLINEFORM0 to INLINEFORM1 . This result corroborates the findings by BIBREF11 , stating that setting INLINEFORM2 in eq:scalar-gate was better than setting it to higher values.
In summary, the gating mechanisms learn how to compensate for the lack of expressivity of underrepresented words by selectively combining their representations with those of characters.
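The analysis behind these figures can be sketched as follows: average the vector-gate activations per word and inspect how that average changes with corpus frequency. The gate tensor, vocabulary, and counts below are placeholders.

import torch
from collections import Counter

def gate_statistics(words, gate_values: torch.Tensor, freq: Counter):
    # gate_values: (n_words, dim) sigmoid activations of the vector gate, one row per word.
    mean_per_word = gate_values.mean(dim=1)                  # average over dimensions
    stats = [(w, freq[w], g.item()) for w, g in zip(words, mean_per_word)]
    return sorted(stats, key=lambda t: t[1])                 # sort by frequency for inspection

stats = gate_statistics(["the", "surfboard"],
                        torch.sigmoid(torch.randn(2, 300)),
                        Counter({"the": 10000, "surfboard": 3}))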
Sentence-level Evaluation
table:sentlevelresults shows the impact that different methods for combining character and word-level word representations have on the quality of the sentence representations produced by our models.
We can observe the same trend mentioned in subsec:word-similarity-eval, and highlighted by the difference between bold values, that models trained in MultiNLI performed better than those trained in SNLI at a statistically significant level, confirming the findings of BIBREF13 . In other words, training sentence encoders on MultiNLI yields more general sentence representations than doing so on SNLI.
The two exceptions to the previous trend, SICKE and SICKR, benefited more from models trained on SNLI. We hypothesize this is again due to both SNLI and SICK BIBREF26 having similar data distributions.
Additionally, there was no method that significantly outperformed the word only baseline in classification tasks. This means that the added expressivity offered by explicitly modeling characters, be it through concatenation or gating, was not significantly better than simply fine-tuning the pre-trained GloVe embeddings for this type of task. We hypothesize this is due to the conflation of two effects. First, the fact that morphological processes might not encode important information for solving these tasks; and second, that SNLI and MultiNLI belong to domains that are too dissimilar to the domains in which the sentence representations are being tested.
On the other hand, the vector gate significantly outperformed every other method in the STSB task when trained in both datasets, and in the STS16 task when trained in SNLI. This again hints at this method being capable of modeling phenomena at the word level, resulting in improved semantic representations at the sentence level.
Relationship Between Word- and Sentence-level Evaluation Tasks
It is clear that the better performance the vector gate had in word similarity tasks did not translate into overall better performance in downstream tasks. This confirms previous findings indicating that intrinsic word evaluation metrics are not good predictors of downstream performance BIBREF29 , BIBREF30 , BIBREF20 , BIBREF31 .
subfig:mnli-correlations shows that the word representations created by the vector gate trained in MultiNLI had positively-correlated results within several word-similarity tasks. This hints at the generality of the word representations created by this method when modeling similarity and relatedness.
However, the same cannot be said about sentence-level evaluation performance; there is no clear correlation between word similarity tasks and sentence-evaluation tasks. This is clearly illustrated by the STSBenchmark, the only task in which the vector gate was significantly superior, whose performance does not correlate with performance in any word-similarity dataset. In other words, word-level representations capturing word similarity is not a sufficient condition for good performance in sentence-level tasks.
In general, fig:correlations shows that there are no general correlation effects spanning both training datasets and combination mechanisms. For example, subfig:snli-correlations shows that, for both word-only and concat models trained in SNLI, performance in word similarity tasks correlates positively with performance in most sentence evaluation tasks, however, this does not happen as clearly for the same models trained in MultiNLI (subfig:mnli-correlations).
Gating Mechanisms for Combining Characters and Word Representations
To the best of our knowledge, there are only two recent works that specifically study how to combine word and subword-level vector representations.
BIBREF11 propose to use a trainable scalar gating mechanism capable of learning a weighting scheme for combining character-level and word-level representations. They compared their proposed method to manually weighting both levels; using characters only; words only; or their concatenation. They found that in some datasets a specific manual weighting scheme performed better, while in others the learned scalar gate did.
BIBREF12 further expand the gating concept by making the mechanism work at a finer-grained level, learning how to weight each vector's dimensions independently, conditioned on external word-level features such as part-of-speech and named-entity tags. Similarly, they compared their proposed mechanism to using words only, characters only, and a concatenation of both, with and without external features. They found that their vector gate performed better than the other methods in all the reported tasks, and beat the state of the art in two reading comprehension tasks.
Both works showed that the gating mechanisms assigned greater importance to character-level representations in rare words, and to word-level representations in common ones, reaffirming the previous findings that subword structures in general, and characters in particular, are beneficial for modeling uncommon words.
Sentence Representation Learning
The problem of representing sentences as fixed-length vectors has been widely studied.
BIBREF32 suggested a self-adaptive hierarchical model that gradually composes words into intermediate phrase representations, and adaptively selects specific hierarchical levels for specific tasks. BIBREF33 proposed an encoder-decoder model trained by attempting to reconstruct the surrounding sentences of an encoded passage, in a fashion similar to Skip-gram BIBREF34 . BIBREF35 overcame the previous model's need for ordered training sentences by using autoencoders for creating the sentence representations. BIBREF36 implemented a model simpler and faster to train than the previous two, while having competitive performance. Similar to BIBREF33 , BIBREF37 suggested predicting future sentences with a hierarchical CNN-LSTM encoder.
BIBREF13 trained several sentence encoding architectures on a combination of the SNLI and MultiNLI datasets, and showed that a BiLSTM with max-pooling was the best at producing highly transferable sentence representations. More recently, BIBREF18 empirically showed that sentence representations created in a multi-task setting BIBREF38 , performed increasingly better the more tasks they were trained in. BIBREF39 proposed using an autoencoder that relies on multi-head self-attention over the concatenation of the max and mean pooled encoder outputs for producing sentence representations. Finally, BIBREF40 show that modern sentence embedding methods are not vastly superior to random methods.
The works mentioned so far usually evaluate the quality of the produced sentence representations in sentence-level downstream tasks. Common benchmarks grouping these kinds of tasks include SentEval BIBREF23 and GLUE BIBREF41 . Another trend, however, is to probe sentence representations to understand what linguistic phenomena they encode BIBREF42 , BIBREF43 , BIBREF44 , BIBREF45 , BIBREF46 .
General Feature-wise Transformations
BIBREF47 provide a review on feature-wise transformation methods, of which the mechanisms presented in this paper form a part. In a few words, the INLINEFORM0 parameter, in both the scalar gate and vector gate mechanisms, can be understood as a scaling parameter limited to the INLINEFORM1 range and conditioned on word representations, whereas adding the scaled INLINEFORM2 and INLINEFORM3 representations can be seen as biasing word representations conditioned on character representations.
The previous review extends the work by BIBREF48 , which describes the Feature-wise Linear Modulation (FiLM) framework as a generalization of Conditional Normalization methods, and applies it in visual reasoning tasks. Some of the reported findings are that, in general, scaling has a greater impact than biasing, and that in a setting similar to the scalar gate, limiting the scaling parameter to INLINEFORM0 hurt performance. Future decisions involving the design of mechanisms for combining character and word-level representations should be informed by these insights.
Conclusions
We presented an empirical study showing the effect that different ways of combining character and word representations have on word-level and sentence-level evaluation tasks.
We showed that a vector gate performed consistently better across a variety of word similarity and relatedness tasks. Additionally, despite showing inconsistent results in sentence evaluation tasks, it performed significantly better than the other methods in semantic similarity tasks.
We further showed, through this mechanism, that learning character-level representations is always beneficial, and becomes increasingly so with less common words.
In the future it would be interesting to study how the choice of mechanism for combining subword and word representations affects the more recent language-model-based pretraining methods such as ELMo BIBREF49 , GPT BIBREF50 , BIBREF51 and BERT BIBREF52 .
Acknowledgements
Thanks to Edison Marrese-Taylor and Pablo Loyola for their feedback on early versions of this manuscript. We also gratefully acknowledge the support of the NVIDIA Corporation with the donation of one of the GPUs used for this research. Jorge A. Balazs is partially supported by the Japanese Government MEXT Scholarship.
Hyperparameters
For each dataset, we only considered words that appeared at least twice; those that appeared only once were treated as UNK. We used the Treebank Word Tokenizer, as implemented in NLTK, for tokenizing the training and development datasets.
In the same fashion as conneau2017supervised, we used a batch size of 64, an SGD optimizer with an initial learning rate of INLINEFORM0 , and at each epoch divided the learning rate by 5 if the validation accuracy decreased. We also used gradient clipping when gradients were INLINEFORM1 .
We defined character vector representations as 50-dimensional vectors randomly initialized by sampling from the uniform distribution in the INLINEFORM0 range.
The output dimension of the character-level BiLSTM was 300 per direction, and the representation remained 300-dimensional after combining the forward and backward representations as depicted in eq. EQREF9 .
Word vector representations were initialized from the 300-dimensional GloVe vectors BIBREF14 , trained on 840B tokens from the Common Crawl, and fine-tuned during training. Words not present in the GloVe vocabulary were randomly initialized by sampling from the uniform distribution in the INLINEFORM0 range.
The input size of the word-level LSTM was 300 for every method except concat in which it was 600, and its output was always 2048 per direction, resulting in a 4096-dimensional sentence representation.
Sentence Evaluation Datasets
table:sentence-eval-datasets lists the sentence-level evaluation datasets used in this paper. The provided URLs correspond to the original sources, and not necessarily to the URLs where SentEval got the data from.
The versions of the CR, MPQA, MR, and SUBJ datasets used in this paper were the ones preprocessed by BIBREF75 . Both SST2 and SST5 correspond to preprocessed versions of the SST dataset by BIBREF74 . SST2 corresponds to a subset of SST used by BIBREF54 containing flat representations of sentences annotated with binary sentiment labels, and SST5 to another subset annotated with more fine-grained sentiment labels (very negative, negative, neutral, positive, very positive). | MEN, MTurk287, MTurk771, RG, RW, SimLex999, SimVerb3500, WS353, WS353R, WS353S |
9f89bff89cea722debc991363f0826de945bc582 | 9f89bff89cea722debc991363f0826de945bc582_1 | Q: Which similarity datasets do they use?
Text: Introduction
Incorporating sub-word structures like substrings, morphemes and characters into the creation of word representations significantly increases their quality as reflected both by intrinsic metrics and by performance in a wide range of downstream tasks BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 .
The reason for this improvement is related to sub-word structures containing information that is usually ignored by standard word-level models. Indeed, when representing words as vectors extracted from a lookup table, semantically related words resulting from inflectional processes such as surf, surfing, and surfed, are treated as being independent from one another. Further, word-level embeddings do not account for derivational processes resulting in syntactically-similar words with different meanings such as break, breakable, and unbreakable. This causes derived words, which are usually less frequent, to have lower-quality (or no) vector representations.
Previous works have successfully combined character-level and word-level word representations, obtaining overall better results than using only word-level representations. For example BIBREF1 achieved state-of-the-art results in a machine translation task by representing unknown words as a composition of their characters. BIBREF4 created word representations by adding the vector representations of the words' surface forms and their morphemes ( INLINEFORM0 ), obtaining significant improvements on intrinsic evaluation tasks, word similarity and machine translation. BIBREF5 concatenated character-level and word-level representations for creating word representations, and then used them as input to their models for obtaining state-of-the-art results in Named Entity Recognition on several languages.
What these works have in common is that the models they describe first learn how to represent subword information, at character BIBREF1 , morpheme BIBREF4 , or substring BIBREF0 levels, and then combine these learned representations at the word level. The incorporation of information at a finer-grained hierarchy results in higher-quality modeling of rare words, morphological processes, and semantics BIBREF6 .
There is no consensus, however, on which combination method works better in which case, or how the choice of a combination method affects downstream performance, either measured intrinsically at the word level, or extrinsically at the sentence level.
In this paper we aim to provide some intuitions about how the choice of mechanism for combining character-level with word-level representations influences the quality of the final word representations, and the subsequent effect these have in the performance of downstream tasks. Our contributions are as follows:
Background
We are interested in studying different ways of combining word representations, obtained from different hierarchies, into a single word representation. Specifically, we want to study how combining word representations (1) taken directly from a word embedding lookup table, and (2) obtained from a function over the characters composing them, affects the quality of the final word representations.
Let INLINEFORM0 be a set, or vocabulary, of words with INLINEFORM1 elements, and INLINEFORM2 a vocabulary of characters with INLINEFORM3 elements. Further, let INLINEFORM4 be a sequence of words, and INLINEFORM5 be the sequence of characters composing INLINEFORM6 . Each token INLINEFORM7 can be represented as a vector INLINEFORM8 extracted directly from an embedding lookup table INLINEFORM9 , pre-trained or otherwise, and as a vector INLINEFORM10 built from the characters that compose it; in other words, INLINEFORM11 , where INLINEFORM12 is a function that maps a sequence of characters to a vector.
The methods for combining word and character-level representations we study, are of the form INLINEFORM0 where INLINEFORM1 is the final word representation.
Mapping Characters to Character-level Word Representations
The function INLINEFORM0 is composed of an embedding layer, an optional context function, and an aggregation function.
The embedding layer transforms each character INLINEFORM0 into a vector INLINEFORM1 of dimension INLINEFORM2 , by directly taking it from a trainable embedding lookup table INLINEFORM3 . We define the matrix representation of word INLINEFORM4 as INLINEFORM5 .
The context function takes INLINEFORM0 as input and returns a context-enriched matrix representation INLINEFORM1 , in which each INLINEFORM2 contains a measure of information about its context, and interactions with its neighbors. In particular, we chose to do this by feeding INLINEFORM3 to a BiLSTM BIBREF7 , BIBREF8 .
Informally, we can think of LSTM BIBREF10 as a function INLINEFORM0 that takes a matrix INLINEFORM1 as input and returns a context-enriched matrix representation INLINEFORM2 , where each INLINEFORM3 encodes information about the previous elements INLINEFORM4 .
A BiLSTM is simply composed of two LSTMs, one that reads the input from left to right (forward), and another that does so from right to left (backward). The outputs of the forward and backward LSTMs are INLINEFORM0 and INLINEFORM1 respectively. In the backward case the LSTM reads INLINEFORM2 first and INLINEFORM3 last; therefore, INLINEFORM4 will encode the context from INLINEFORM5 .
The aggregation function takes the context-enriched matrix representation of word INLINEFORM0 for both directions, INLINEFORM1 and INLINEFORM2 , and returns a single vector INLINEFORM3 . To do so we followed BIBREF11 , and defined the character-level representation INLINEFORM4 of word INLINEFORM5 as the linear combination of the forward and backward last hidden states returned by the context function: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are trainable parameters, and INLINEFORM2 represents the concatenation operation between two vectors.
Combining Character and Word-level Representations
We tested three different methods for combining INLINEFORM0 with INLINEFORM1 : simple concatenation, a learned scalar gate BIBREF11 , and a learned vector gate (also referred to as feature-wise sigmoidal gate). Additionally, we compared these methods to two baselines: using pre-trained word vectors only, and using character-only features for representing words. See fig:methods for a visual description of the proposed methods.
word-only (w) considers only INLINEFORM0 and ignores INLINEFORM1 : DISPLAYFORM0
char-only (c) considers only INLINEFORM0 and ignores INLINEFORM1 : DISPLAYFORM0
concat (cat) concatenates both word and character-level representations: DISPLAYFORM0
scalar gate (sg) implements the scalar gating mechanism described by BIBREF11 : DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are trainable parameters, INLINEFORM2 , and INLINEFORM3 is the sigmoid function.
vector gate (vg): DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are trainable parameters, INLINEFORM2 , INLINEFORM3 is the element-wise sigmoid function, INLINEFORM4 is the element-wise product for vectors, and INLINEFORM5 is a vector of ones.
The vector gate is inspired by BIBREF11 and BIBREF12 , but is different to the former in that the gating mechanism acts upon each dimension of the word and character-level vectors, and different to the latter in that it does not rely on external sources of information for calculating the gating mechanism.
Finally, note that word only and char only are special cases of both gating mechanisms: INLINEFORM0 (scalar gate) and INLINEFORM1 (vector gate) correspond to word only; INLINEFORM2 and INLINEFORM3 correspond to char only.
Obtaining Sentence Representations
To enable sentence-level classification we need to obtain a sentence representation from the word vectors INLINEFORM0 . We achieved this by using a BiLSTM with max pooling, which was shown to be a good universal sentence encoding mechanism BIBREF13 .
Let INLINEFORM0 be an input sentence and INLINEFORM1 its matrix representation, where each INLINEFORM2 was obtained by one of the methods described in subsec:methods. INLINEFORM3 is the context-enriched matrix representation of INLINEFORM4 obtained by feeding INLINEFORM5 to a BiLSTM of output dimension INLINEFORM6 . Lastly, INLINEFORM11 is the final sentence representation of INLINEFORM12 obtained by max-pooling INLINEFORM13 along the sequence dimension.
Finally, we initialized the word representations INLINEFORM0 using GloVe embeddings BIBREF14 , and fine-tuned them during training. Refer to app:hyperparams for details on the other hyperparameters we used.
Experimental Setup
We trained our models for solving the Natural Language Inference (NLI) task in two datasets, SNLI BIBREF15 and MultiNLI BIBREF16 , and validated them in each corresponding development set (including the matched and mismatched development sets of MultiNLI).
For each dataset-method combination we trained 7 models initialized with different random seeds, and saved each when it reached its best validation accuracy. We then evaluated the quality of each trained model's word representations INLINEFORM0 in 10 word similarity tasks, using the system created by BIBREF17 .
Finally, we fed these obtained word vectors to a BiLSTM with max-pooling and evaluated the final sentence representations in 11 downstream transfer tasks BIBREF13 , BIBREF18 .
Datasets
Word-level Semantic Similarity A desirable property of vector representations of words is that semantically similar words should have similar vector representations. Assessing whether a set of word representations possesses this quality is referred to as the semantic similarity task. This is the most widely used method for evaluating word representations, despite its shortcomings BIBREF20 .
This task consists of comparing the similarity between word vectors measured by a distance metric (usually cosine distance), with a similarity score obtained from human judgements. High correlation between these similarities is an indicator of good performance.
A problem with this formulation, though, is that the definition of “similarity” often conflates similarity and relatedness. For example, cup and tea are related but dissimilar words, and this type of distinction is not always clear BIBREF21 , BIBREF22 .
To face the previous problem, we tested our methods in a wide variety of datasets, including some that explicitly model relatedness (WS353R), some that explicitly consider similarity (WS353S, SimLex999, SimVerb3500), and some where the distinction is not clear (MEN, MTurk287, MTurk771, RG, WS353). We also included the RareWords (RW) dataset for evaluating the quality of rare word representations. See appendix:datasets for a more complete description of the datasets we used.
Sentence-level Evaluation Tasks Unlike word-level representations, there is no consensus on the desirable properties sentence representations should have. In response to this, BIBREF13 created SentEval, a sentence representation evaluation benchmark designed for assessing how well sentence representations perform in various downstream tasks BIBREF23 .
Some of the datasets included in SentEval correspond to sentiment classification (CR, MPQA, MR, SST2, and SST5), subjectivity classification (SUBJ), question-type classification (TREC), recognizing textual entailment (SICK E), estimating semantic relatedness (SICK R), and measuring textual semantic similarity (STS16, STSB). The datasets are described by BIBREF13 , and we provide pointers to their original sources in the appendix table:sentence-eval-datasets.
To evaluate these sentence representations SentEval trained a linear model on top of them, and evaluated their performance in the validation sets accompanying each dataset. The only exception was the STS16 task, in which our representations were evaluated directly.
Word Similarity
table:wordlevelresults shows the quality of word representations in terms of the correlation between word similarity scores obtained by the proposed models and word similarity scores defined by humans.
First, we can see that for each task, character only models had significantly worse performance than every other model trained on the same dataset. The most likely explanation for this is that these models are the only ones that need to learn word representations from scratch, since they have no access to the global semantic knowledge encoded by the GloVe embeddings.
Further, bold results show the overall trend that vector gates outperformed the other methods regardless of training dataset. This implies that learning how to combine character and word-level representations at the dimension level produces word vector representations that capture a notion of word similarity and relatedness that is closer to that of humans.
Additionally, results from the MNLI row in general, and underlined results in particular, show that training on MultiNLI produces word representations better at capturing word similarity. This is probably due to MultiNLI data being richer than that of SNLI. Indeed, MultiNLI data was gathered from various sources (novels, reports, letters, and telephone conversations, among others), rather than the single image captions dataset from which SNLI was created.
Exceptions to the previous rule are models evaluated in MEN and RW. The former case can be explained by the MEN dataset containing only words that appear as image labels in the ESP-Game and MIRFLICKR-1M image datasets BIBREF24 , and therefore having data that is more closely distributed to SNLI than to MultiNLI.
More notably, in the RareWords dataset BIBREF25 , the word only, concat, and scalar gate methods performed equally, despite having been trained in different datasets ( INLINEFORM0 ), and the char only method performed significantly worse when trained in MultiNLI. The vector gate, however, performed significantly better than its counterpart trained in SNLI. These facts provide evidence that this method is capable of capturing linguistic phenomena that the other methods are unable to model.
table:word-similarity-dataset lists the word-similarity datasets and their corresponding reference. As mentioned in subsec:datasets, all the word-similarity datasets contain pairs of words annotated with similarity or relatedness scores, although this difference is not always explicit. Below we provide some details for each.
MEN contains 3000 annotated word pairs with integer scores ranging from 0 to 50. Words correspond to image labels appearing in the ESP-Game and MIRFLICKR-1M image datasets.
MTurk287 contains 287 annotated pairs with scores ranging from 1.0 to 5.0. It was created from words appearing in both DBpedia and in news articles from The New York Times.
MTurk771 contains 771 annotated pairs with scores ranging from 1.0 to 5.0, with words having synonymy, holonymy or meronymy relationships sampled from WordNet BIBREF56 .
RG contains 65 annotated pairs with scores ranging from 0.0 to 4.0 representing “similarity of meaning”.
RW contains 2034 pairs of words annotated with similarity scores in a scale from 0 to 10. The words included in this dataset were obtained from Wikipedia based on their frequency, and later filtered depending on their WordNet synsets, including synonymy, hyperonymy, hyponymy, holonymy and meronymy. This dataset was created with the purpose of testing how well models can represent rare and complex words.
SimLex999 contains 999 word pairs annotated with similarity scores ranging from 0 to 10. In this case the authors explicitly considered similarity and not relatedness, addressing the shortcomings of datasets that do not, such as MEN and WS353. Words include nouns, adjectives and verbs.
SimVerb3500 contains 3500 verb pairs annotated with similarity scores ranging from 0 to 10. Verbs were obtained from the USF free association database BIBREF66 , and VerbNet BIBREF63 . This dataset was created to address the lack of representativity of verbs in SimLex999, and the fact that, at the time of creation, the best performing models had already surpassed inter-annotator agreement in verb similarity evaluation resources. Like SimLex999, this dataset also explicitly considers similarity as opposed to relatedness.
WS353 contains 353 word pairs annotated with similarity scores from 0 to 10.
WS353R is a subset of WS353 containing 252 word pairs annotated with relatedness scores. This dataset was created by asking humans to classify each WS353 word pair into one of the following classes: synonyms, antonyms, identical, hyperonym-hyponym, hyponym-hyperonym, holonym-meronym, meronym-holonym, and none-of-the-above. These annotations were later used to group the pairs into: similar pairs (synonyms, antonyms, identical, hyperonym-hyponym, and hyponym-hyperonym), related pairs (holonym-meronym, meronym-holonym, and none-of-the-above with a human similarity score greater than 5), and unrelated pairs (classified as none-of-the-above with a similarity score less than or equal to 5). This dataset is composed by the union of related and unrelated pairs.
WS353S is another subset of WS353 containing 203 word pairs annotated with similarity scores. This dataset is composed by the union of similar and unrelated pairs, as described previously.
Word Frequencies and Gating Values
fig:gatingviz shows that for more common words the vector gate mechanism tends to favor only a few dimensions while keeping a low average gating value across dimensions. On the other hand, values are greater and more homogeneous across dimensions in rarer words. Further, fig:freqvsgatevalue shows this mechanism assigns, on average, a greater gating value to less frequent words, confirming the findings by BIBREF11 , and BIBREF12 .
In other words, the less frequent the word, the more this mechanism allows the character-level representation to influence the final word representation, as shown by eq:vg. A possible interpretation of this result is that exploiting character information becomes increasingly necessary as word-level representations' quality decreases.
Another observable trend in both figures is that gating values tend to be low on average. Indeed, it is possible to see in fig:freqvsgatevalue that the average gating values range from INLINEFORM0 to INLINEFORM1 . This result corroborates the findings by BIBREF11 , stating that setting INLINEFORM2 in eq:scalar-gate was better than setting it to higher values.
In summary, the gating mechanisms learn how to compensate for the lack of expressivity of underrepresented words by selectively combining their representations with those of characters.
Sentence-level Evaluation
table:sentlevelresults shows the impact that different methods for combining character and word-level word representations have on the quality of the sentence representations produced by our models.
We can observe the same trend mentioned in subsec:word-similarity-eval, and highlighted by the difference between bold values, that models trained in MultiNLI performed better than those trained in SNLI at a statistically significant level, confirming the findings of BIBREF13 . In other words, training sentence encoders on MultiNLI yields more general sentence representations than doing so on SNLI.
The two exceptions to the previous trend, SICKE and SICKR, benefited more from models trained on SNLI. We hypothesize this is again due to both SNLI and SICK BIBREF26 having similar data distributions.
Additionally, there was no method that significantly outperformed the word only baseline in classification tasks. This means that the added expressivity offered by explicitly modeling characters, be it through concatenation or gating, was not significantly better than simply fine-tuning the pre-trained GloVe embeddings for this type of task. We hypothesize this is due to the conflation of two effects. First, the fact that morphological processes might not encode important information for solving these tasks; and second, that SNLI and MultiNLI belong to domains that are too dissimilar to the domains in which the sentence representations are being tested.
On the other hand, the vector gate significantly outperformed every other method in the STSB task when trained in both datasets, and in the STS16 task when trained in SNLI. This again hints at this method being capable of modeling phenomena at the word level, resulting in improved semantic representations at the sentence level.
Relationship Between Word- and Sentence-level Evaluation Tasks
It is clear that the better performance the vector gate had in word similarity tasks did not translate into overall better performance in downstream tasks. This confirms previous findings indicating that intrinsic word evaluation metrics are not good predictors of downstream performance BIBREF29 , BIBREF30 , BIBREF20 , BIBREF31 .
subfig:mnli-correlations shows that the word representations created by the vector gate trained in MultiNLI had positively-correlated results within several word-similarity tasks. This hints at the generality of the word representations created by this method when modeling similarity and relatedness.
However, the same cannot be said about sentence-level evaluation performance; there is no clear correlation between word similarity tasks and sentence-evaluation tasks. This is clearly illustrated by the STSBenchmark, the only task in which the vector gate was significantly superior, whose performance does not correlate with performance in any word-similarity dataset. In other words, word-level representations capturing word similarity is not a sufficient condition for good performance in sentence-level tasks.
In general, fig:correlations shows that there are no general correlation effects spanning both training datasets and combination mechanisms. For example, subfig:snli-correlations shows that, for both word-only and concat models trained in SNLI, performance in word similarity tasks correlates positively with performance in most sentence evaluation tasks, however, this does not happen as clearly for the same models trained in MultiNLI (subfig:mnli-correlations).
Gating Mechanisms for Combining Characters and Word Representations
To the best of our knowledge, there are only two recent works that specifically study how to combine word and subword-level vector representations.
BIBREF11 propose to use a trainable scalar gating mechanism capable of learning a weighting scheme for combining character-level and word-level representations. They compared their proposed method to manually weighting both levels; using characters only; words only; or their concatenation. They found that in some datasets a specific manual weighting scheme performed better, while in others the learned scalar gate did.
BIBREF12 further expand the gating concept by making the mechanism work at a finer-grained level, learning how to weight each vector's dimensions independently, conditioned on external word-level features such as part-of-speech and named-entity tags. Similarly, they compared their proposed mechanism to using words only, characters only, and a concatenation of both, with and without external features. They found that their vector gate performed better than the other methods in all the reported tasks, and beat the state of the art in two reading comprehension tasks.
Both works showed that the gating mechanisms assigned greater importance to character-level representations in rare words, and to word-level representations in common ones, reaffirming the previous findings that subword structures in general, and characters in particular, are beneficial for modeling uncommon words.
Sentence Representation Learning
The problem of representing sentences as fixed-length vectors has been widely studied.
BIBREF32 suggested a self-adaptive hierarchical model that gradually composes words into intermediate phrase representations, and adaptively selects specific hierarchical levels for specific tasks. BIBREF33 proposed an encoder-decoder model trained by attempting to reconstruct the surrounding sentences of an encoded passage, in a fashion similar to Skip-gram BIBREF34 . BIBREF35 overcame the previous model's need for ordered training sentences by using autoencoders for creating the sentence representations. BIBREF36 implemented a model simpler and faster to train than the previous two, while having competitive performance. Similar to BIBREF33 , BIBREF37 suggested predicting future sentences with a hierarchical CNN-LSTM encoder.
BIBREF13 trained several sentence encoding architectures on a combination of the SNLI and MultiNLI datasets, and showed that a BiLSTM with max-pooling was the best at producing highly transferable sentence representations. More recently, BIBREF18 empirically showed that sentence representations created in a multi-task setting BIBREF38 , performed increasingly better the more tasks they were trained in. BIBREF39 proposed using an autoencoder that relies on multi-head self-attention over the concatenation of the max and mean pooled encoder outputs for producing sentence representations. Finally, BIBREF40 show that modern sentence embedding methods are not vastly superior to random methods.
The works mentioned so far usually evaluate the quality of the produced sentence representations in sentence-level downstream tasks. Common benchmarks grouping these kind of tasks include SentEval BIBREF23 , and GLUE BIBREF41 . Another trend, however, is to probe sentence representations to understand what linguistic phenomena they encode BIBREF42 , BIBREF43 , BIBREF44 , BIBREF45 , BIBREF46 .
General Feature-wise Transformations
BIBREF47 provide a review on feature-wise transformation methods, of which the mechanisms presented in this paper form a part. In a few words, the gating parameter, in both scalar gate and vector gate mechanisms, can be understood as a scaling parameter limited to the $(0, 1)$ range and conditioned on word representations, whereas adding the scaled word and character representations can be seen as biasing word representations conditioned on character representations.
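To make the scaling-and-biasing reading concrete, the following is a minimal PyTorch sketch of a sigmoid vector gate over word- and character-level vectors. The module and variable names are ours, and whether the gate scales the word or the character term follows the paper's earlier equations, which are not reproduced here, so one convention is picked purely for illustration.

```python
import torch
import torch.nn as nn

class VectorGate(nn.Module):
    """Sketch of a per-dimension sigmoid gate combining word and character vectors."""

    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, word_vec, char_vec):
        g = torch.sigmoid(self.linear(word_vec))    # scaling parameter in (0, 1), conditioned on the word
        return g * word_vec + (1.0 - g) * char_vec  # the scaled character term acts as a bias
```

A scalar gate is obtained by replacing the output dimension of the linear layer with 1, so that a single value scales the whole vector.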
The previous review extends the work by BIBREF48 , which describes the Feature-wise Linear Modulation (FiLM) framework as a generalization of Conditional Normalization methods, and applies it to visual reasoning tasks. Some of the reported findings are that, in general, scaling has greater impact than biasing, and that in a setting similar to the scalar gate, limiting the scaling parameter to the $(0, 1)$ range hurt performance. Future decisions involving the design of mechanisms for combining character and word-level representations should be informed by these insights.
Conclusions
We presented an empirical study showing the effect that different ways of combining character and word representations has in word-level and sentence-level evaluation tasks.
We showed that a vector gate performed consistently better across a variety of word similarity and relatedness tasks. Additionally, despite showing inconsistent results in sentence evaluation tasks, it performed significantly better than the other methods in semantic similarity tasks.
We further showed, through this mechanism, that learning character-level representations is always beneficial, and becomes increasingly so with less common words.
In the future it would be interesting to study how the choice of mechanism for combining subword and word representations affects the more recent language-model-based pretraining methods such as ELMo BIBREF49 , GPT BIBREF50 , BIBREF51 and BERT BIBREF52 .
Acknowledgements
Thanks to Edison Marrese-Taylor and Pablo Loyola for their feedback on early versions of this manuscript. We also gratefully acknowledge the support of the NVIDIA Corporation with the donation of one of the GPUs used for this research. Jorge A. Balazs is partially supported by the Japanese Government MEXT Scholarship.
Hyperparameters
For each dataset, we only considered words that appeared at least twice; those that appeared only once were considered UNK. We used the Treebank Word Tokenizer as implemented in NLTK for tokenizing the training and development datasets.
In the same fashion as conneau2017supervised, we used a batch size of 64, an SGD optimizer with an initial learning rate of INLINEFORM0 , and at each epoch divided the learning rate by 5 if the validation accuracy decreased. We also used gradient clipping when gradients were INLINEFORM1 .
We defined character vector representations as 50-dimensional vectors randomly initialized by sampling from the uniform distribution in the INLINEFORM0 range.
The output dimension of the character-level BiLSTM was 300 per direction, and remained of such size after combining forward and backward representations as depicted in eq. EQREF9 .
Word vector representations were initialized from the 300-dimensional GloVe vectors BIBREF14 , trained on 840B tokens from the Common Crawl, and finetuned during training. Words not present in the GloVe vocabulary were randomly initialized by sampling from the uniform distribution in the INLINEFORM0 range.
The input size of the word-level LSTM was 300 for every method except concat in which it was 600, and its output was always 2048 per direction, resulting in a 4096-dimensional sentence representation.
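For reference, the reported sizes correspond to layer configurations along the following lines; this is an illustrative sketch, and the character vocabulary size and the 600-dimensional input of the concat variant are assumptions inferred from the text rather than values taken from the released code.

```python
import torch.nn as nn

char_emb  = nn.Embedding(num_embeddings=100, embedding_dim=50)    # vocabulary size is illustrative
char_lstm = nn.LSTM(input_size=50, hidden_size=300,
                    bidirectional=True, batch_first=True)         # 300 per direction, combined back to 300
word_lstm = nn.LSTM(input_size=600,                                # 300-d GloVe + 300-d character summary (concat)
                    hidden_size=2048, bidirectional=True, batch_first=True)
# Max-pooling the 2 x 2048 = 4096-dimensional outputs over time yields the sentence representation.
```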
Sentence Evaluation Datasets
table:sentence-eval-datasets lists the sentence-level evaluation datasets used in this paper. The provided URLs correspond to the original sources, and not necessarily to the URLs where SentEval got the data from.
The version of the CR, MPQA, MR, and SUBJ datasets used in this paper were the ones preprocessed by BIBREF75 . Both SST2 and SST5 correspond to preprocessed versions of the SST dataset by BIBREF74 . SST2 corresponds to a subset of SST used by BIBREF54 containing flat representations of sentences annotated with binary sentiment labels, and SST5 to another subset annotated with more fine-grained sentiment labels (very negative, negative, neutral, positive, very positive). | WS353S, SimLex999, SimVerb3500 |
735f58e28d84ee92024a36bc348cfac2ee114409 | 735f58e28d84ee92024a36bc348cfac2ee114409_0 | Q: Are there datasets with relation tuples annotated, how big are datasets available?
Text: Introduction
Distantly-supervised information extraction systems extract relation tuples with a set of pre-defined relations from text. Traditionally, researchers BIBREF0, BIBREF1, BIBREF2 use pipeline approaches where a named entity recognition (NER) system is used to identify the entities in a sentence and then a classifier is used to find the relation (or no relation) between them. However, due to the complete separation of entity detection and relation classification, these models miss the interaction between multiple relation tuples present in a sentence.
Recently, several neural network-based models BIBREF3, BIBREF4 were proposed to jointly extract entities and relations from a sentence. These models used a parameter-sharing mechanism to extract the entities and relations in the same network. But they still find the relations after identifying all the entities and do not fully capture the interaction among multiple tuples. BIBREF5 (BIBREF5) proposed a joint extraction model based on neural sequence tagging scheme. But their model could not extract tuples with overlapping entities in a sentence as it could not assign more than one tag to a word. BIBREF6 (BIBREF6) proposed a neural encoder-decoder model for extracting relation tuples with overlapping entities. However, they used a copy mechanism to copy only the last token of the entities, thus this model could not extract the full entity names. Also, their best performing model used a separate decoder to extract each tuple which limited the power of their model. This model was trained with a fixed number of decoders and could not extract tuples beyond that number during inference. Encoder-decoder models are powerful models and they are successful in many NLP tasks such as machine translation, sentence generation from structured data, and open information extraction.
In this paper, we explore how encoder-decoder models can be used effectively for extracting relation tuples from sentences. There are three major challenges in this task: (i) The model should be able to extract entities and relations together. (ii) It should be able to extract multiple tuples with overlapping entities. (iii) It should be able to extract exactly two entities of a tuple with their full names. To address these challenges, we propose two novel approaches using encoder-decoder architecture. We first propose a new representation scheme for relation tuples (Table TABREF1) such that it can represent multiple tuples with overlapping entities and different lengths of entities in a simple way. We employ an encoder-decoder model where the decoder extracts one word at a time like machine translation models. At the end of sequence generation, due to the unique representation of the tuples, we can extract the tuples from the sequence of words. Although this model performs quite well, generating one word at a time is somewhat unnatural for this task. Each tuple has exactly two entities and one relation, and each entity appears as a continuous text span in a sentence. The most effective way to identify them is to find their start and end location in the sentence. Each relation tuple can then be represented using five items: start and end location of the two entities and the relation between them (see Table TABREF1). Keeping this in mind, we propose a pointer network-based decoding framework. This decoder consists of two pointer networks which find the start and end location of the two entities in a sentence, and a classification network which identifies the relation between them. At every time step of the decoding, this decoder extracts an entire relation tuple, not just a word. Experiments on the New York Times (NYT) datasets show that our approaches work effectively for this task and achieve state-of-the-art performance. To summarize, the contributions of this paper are as follows:
(1) We propose a new representation scheme for relation tuples such that an encoder-decoder model, which extracts one word at each time step, can still find multiple tuples with overlapping entities and tuples with multi-token entities from sentences. We also propose a masking-based copy mechanism to extract the entities from the source sentence only.
(2) We propose a modification in the decoding framework with pointer networks to make the encoder-decoder model more suitable for this task. At every time step, this decoder extracts an entire relation tuple, not just a word. This new decoding framework helps in speeding up the training process and uses less resources (GPU memory). This will be an important factor when we move from sentence-level tuple extraction to document-level extraction.
(3) Experiments on the NYT datasets show that our approaches outperform all the previous state-of-the-art models significantly and set a new benchmark on these datasets.
Task Description
A relation tuple consists of two entities and a relation. Such tuples can be found in sentences where an entity is a text span in a sentence and a relation comes from a pre-defined set $R$. These tuples may share one or both entities among them. Based on this, we divide the sentences into three classes: (i) No Entity Overlap (NEO): A sentence in this class has one or more tuples, but they do not share any entities. (ii) Entity Pair Overlap (EPO): A sentence in this class has more than one tuple, and at least two tuples share both the entities in the same or reverse order. (iii) Single Entity Overlap (SEO): A sentence in this class has more than one tuple and at least two tuples share exactly one entity. It should be noted that a sentence can belong to both EPO and SEO classes. Our task is to extract all relation tuples present in a sentence.
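The following helper, with illustrative names of our own, makes these three overlap classes concrete; it assumes tuples are (entity1, entity2, relation) triples with distinct entities within a tuple, and a sentence may indeed receive both the EPO and SEO labels.

```python
def overlap_classes(tuples):
    """Return the overlap classes (NEO / EPO / SEO) of a sentence's relation tuples."""
    epo = seo = False
    for i in range(len(tuples)):
        for j in range(i + 1, len(tuples)):
            shared = {tuples[i][0], tuples[i][1]} & {tuples[j][0], tuples[j][1]}
            if len(shared) == 2:      # both entities shared (same or reverse order)
                epo = True
            elif len(shared) == 1:    # exactly one entity shared
                seo = True
    classes = set()
    if epo:
        classes.add("EPO")
    if seo:
        classes.add("SEO")
    return classes or {"NEO"}
```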
Encoder-Decoder Architecture
In this task, input to the system is a sequence of words, and output is a set of relation tuples. In our first approach, we represent each tuple as entity1 ; entity2 ; relation. We use `;' as a separator token to separate the tuple components. Multiple tuples are separated using the `$\vert $' token. We have included one example of such representation in Table TABREF1. Multiple relation tuples with overlapping entities and different lengths of entities can be represented in a simple way using these special tokens (; and $\vert $). During inference, after the end of sequence generation, relation tuples can be extracted easily using these special tokens. Due to this uniform representation scheme, where entity tokens, relation tokens, and special tokens are treated similarly, we use a shared vocabulary between the encoder and decoder which includes all of these tokens. The input sentence contains clue words for every relation which can help generate the relation tokens. We use two special tokens so that the model can distinguish between the beginning of a relation tuple and the beginning of a tuple component. To extract the relation tuples from a sentence using the encoder-decoder model, the model has to generate the entity tokens, find relation clue words and map them to the relation tokens, and generate the special tokens at the appropriate time steps. Our experiments show that the encoder-decoder models can achieve this quite effectively.
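As an illustration of how tuples are recovered from the generated sequence, a simple post-hoc parser could look as follows; the helper names are ours, and the duplicate and invalid-tuple filtering mirrors the post-processing described later for WordDecoding.

```python
def extract_tuples(decoded_tokens, relation_set):
    """Parse 'entity1 ; entity2 ; relation | ...' token sequences into a set of tuples."""
    tuples = set()
    for chunk in " ".join(decoded_tokens).split("|"):
        parts = [p.strip() for p in chunk.split(";")]
        if len(parts) != 3:
            continue
        e1, e2, rel = parts
        if not e1 or not e2 or e1 == e2 or rel not in relation_set:
            continue  # drop malformed tuples, identical entities, or unknown relations
        tuples.add((e1, e2, rel))
    return tuples
```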
Encoder-Decoder Architecture ::: Embedding Layer & Encoder
We create a single vocabulary $V$ consisting of the source sentence tokens, relation names from relation set $R$, special separator tokens (`;', `$\vert $'), start-of-target-sequence token (SOS), end-of-target-sequence token (EOS), and unknown word token (UNK). Word-level embeddings are formed by two components: (1) pre-trained word vectors (2) character embedding-based feature vectors. We use a word embedding layer $\mathbf {E}_w \in \mathbb {R}^{\vert V \vert \times d_w}$ and a character embedding layer $\mathbf {E}_c \in \mathbb {R}^{\vert A \vert \times d_c}$, where $d_w$ is the dimension of word vectors, $A$ is the character alphabet of input sentence tokens, and $d_c$ is the dimension of character embedding vectors. Following BIBREF7 (BIBREF7), we use a convolutional neural network with max-pooling to extract a feature vector of size $d_f$ for every word. Word embeddings and character embedding-based feature vectors are concatenated ($\Vert $) to obtain the representation of the input tokens.
A source sentence $\mathbf {S}$ is represented by vectors of its tokens $\mathbf {x}_1, \mathbf {x}_2,....,\mathbf {x}_n$, where $\mathbf {x}_i \in \mathbb {R}^{(d_w+d_f)}$ is the vector representation of the $i$th word and $n$ is the length of $\mathbf {S}$. These vectors $\mathbf {x}_i$ are passed to a bi-directional LSTM BIBREF8 (Bi-LSTM) to obtain the hidden representation $\mathbf {h}_i^E$. We set the hidden dimension of the forward and backward LSTM of the Bi-LSTM to be $d_h/2$ to obtain $\mathbf {h}_i^E \in \mathbb {R}^{d_h}$, where $d_h$ is the hidden dimension of the sequence generator LSTM of the decoder described below.
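A compact PyTorch sketch of this embedding layer and Bi-LSTM encoder is given below. The dimensions follow the settings reported later ($d_w=300$, $d_f=50$, $d_h=300$), while the convolution padding and other implementation choices are assumptions of ours rather than the authors' code.

```python
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    def __init__(self, vocab_size, char_size, d_w=300, d_c=50, d_f=50, d_h=300):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_w)
        self.char_emb = nn.Embedding(char_size, d_c)
        self.char_cnn = nn.Conv1d(d_c, d_f, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(d_w + d_f, d_h // 2, bidirectional=True, batch_first=True)

    def forward(self, words, chars):
        # words: (batch, n); chars: (batch, n, max_word_len)
        w = self.word_emb(words)                                      # (batch, n, d_w)
        b, n, L = chars.shape
        c = self.char_emb(chars).view(b * n, L, -1).transpose(1, 2)   # (b*n, d_c, L)
        f = torch.max(self.char_cnn(c), dim=2).values.view(b, n, -1)  # max-pooled char features
        h, _ = self.bilstm(torch.cat([w, f], dim=-1))                 # (batch, n, d_h)
        return h
```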
Encoder-Decoder Architecture ::: Word-level Decoder & Copy Mechanism
A target sequence $\mathbf {T}$ is represented by only word embedding vectors of its tokens $\mathbf {y}_0, \mathbf {y}_1,....,\mathbf {y}_m$ where $\mathbf {y}_i \in \mathbb {R}^{d_w}$ is the embedding vector of the $i$th token and $m$ is the length of the target sequence. $\mathbf {y}_0$ and $\mathbf {y}_m$ represent the embedding vector of the SOS and EOS token respectively. The decoder generates one token at a time and stops when EOS is generated. We use an LSTM as the decoder and at time step $t$, the decoder takes the source sentence encoding ($\mathbf {e}_t \in \mathbb {R}^{d_h}$) and the previous target word embedding ($\mathbf {y}_{t-1}$) as the input and generates the hidden representation of the current token ($\mathbf {h}_t^D \in \mathbb {R}^{d_h}$). The sentence encoding vector $\mathbf {e}_t$ can be obtained using attention mechanism. $\mathbf {h}_t^D$ is projected to the vocabulary $V$ using a linear layer with weight matrix $\mathbf {W}_v \in \mathbb {R}^{\vert V \vert \times d_h}$ and bias vector $\mathbf {b}_v \in \mathbb {R}^{\vert V \vert }$ (projection layer).
$\mathbf {o}_t$ represents the normalized scores of all the words in the embedding vocabulary at time step $t$. $\mathbf {h}_{t-1}^D$ is the previous hidden state of the LSTM.
The projection layer of the decoder maps the decoder output to the entire vocabulary. During training, we use the gold label target tokens directly. However, during inference, the decoder may predict a token from the vocabulary which is not present in the current sentence or the set of relations or the special tokens. To prevent this, we use a masking technique while applying the softmax operation at the projection layer. We mask (exclude) all words of the vocabulary except the current source sentence tokens, relation tokens, separator tokens (`;', `$\vert $'), UNK, and EOS tokens in the softmax operation. To mask (exclude) some word from softmax, we set the corresponding value in $\hat{\mathbf {o}}_t$ at $-\infty $ and the corresponding softmax score will be zero. This ensures the copying of entities from the source sentence only. We include the UNK token in the softmax operation to make sure that the model generates new entities during inference. If the decoder predicts an UNK token, we replace it with the corresponding source word which has the highest attention score. During inference, after decoding is finished, we extract all tuples based on the special tokens, remove duplicate tuples and tuples in which both entities are the same or tuples where the relation token is not from the relation set. This model is referred to as WordDecoding (WDec) henceforth.
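A minimal sketch of this masking step is shown below; the function name and the boolean-mask interface are ours.

```python
import torch

def masked_log_softmax(logits, allowed_mask):
    """Restrict the projection-layer softmax to allowed tokens only.

    `allowed_mask` is a boolean tensor over the vocabulary that is True for the current
    source tokens, the relation tokens, ';', '|', UNK and EOS; all other entries are set
    to -inf so that their probability becomes exactly zero.
    """
    masked = logits.masked_fill(~allowed_mask, float("-inf"))
    return torch.log_softmax(masked, dim=-1)
```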
Encoder-Decoder Architecture ::: Pointer Network-Based Decoder
In the second approach, we identify the entities in the sentence using their start and end locations. We remove the special tokens and relation names from the word vocabulary and word embeddings are used only at the encoder side along with character embeddings. We use an additional relation embedding matrix $\mathbf {E}_r \in \mathbb {R}^{\vert R \vert \times d_r}$ at the decoder side of our model, where $R$ is the set of relations and $d_r$ is the dimension of relation vectors. The relation set $R$ includes a special relation token EOS which indicates the end of the sequence. Relation tuples are represented as a sequence $T=y_0, y_1,....,y_m$, where $y_t$ is a tuple consisting of four indexes in the source sentence indicating the start and end location of the two entities and a relation between them (see Table TABREF1). $y_0$ is a dummy tuple that represents the start tuple of the sequence and $y_m$ functions as the end tuple of the sequence which has EOS as the relation (entities are ignored for this tuple). The decoder consists of an LSTM with hidden dimension $d_h$ to generate the sequence of tuples, two pointer networks to find the two entities, and a classification network to find the relation of a tuple. At time step $t$, the decoder takes the source sentence encoding ($\mathbf {e}_t \in \mathbb {R}^{d_h}$) and the representation of all previously generated tuples ($\mathbf {y}_{prev}=\sum _{j=0}^{t-1}\mathbf {y}_{j}$) as the input and generates the hidden representation of the current tuple, $\mathbf {h}_t^D \in \mathbb {R}^{d_h}$. The sentence encoding vector $\mathbf {e}_t$ is obtained using an attention mechanism as explained later. Relation tuples are a set and to prevent the decoder from generating the same tuple again, we pass the information about all previously generated tuples at each time step of decoding. $\mathbf {y}_j$ is the vector representation of the tuple predicted at time step $j < t$ and we use the zero vector ($\mathbf {y}_0=\overrightarrow{0}$) to represent the dummy tuple $y_0$. $\mathbf {h}_{t-1}^D$ is the hidden state of the LSTM at time step $t-1$.
Encoder-Decoder Architecture ::: Relation Tuple Extraction
After obtaining the hidden representation of the current tuple $\mathbf {h}_t^D$, we first find the start and end pointers of the two entities in the source sentence. We concatenate the vector $\mathbf {h}_t^D$ with the hidden vectors $\mathbf {h}_i^E$ of the encoder and pass them to a Bi-LSTM layer with hidden dimension $d_p$ for forward and backward LSTM. The hidden vectors of this Bi-LSTM layer $\mathbf {h}_i^k \in \mathbb {R}^{2d_p}$ are passed to two feed-forward networks (FFN) with softmax to convert each hidden vector into two scalar values between 0 and 1. Softmax operation is applied across all the words in the input sentence. These two scalar values represent the probability of the corresponding source sentence token to be the start and end location of the first entity. This Bi-LSTM layer with the two feed-forward layers is the first pointer network which identifies the first entity of the current relation tuple.
where $\mathbf {W}_s^1 \in \mathbb {R}^{1 \times 2d_p}$, $\mathbf {W}_e^1 \in \mathbb {R}^{1 \times 2d_p}$, ${b}_s^1$, and ${b}_e^1$ are the weights and bias parameters of the feed-forward layers. ${s}_i^1$, ${e}_i^1$ represent the normalized probabilities of the $i$th source word being the start and end token of the first entity of the predicted tuple. We use another pointer network to extract the second entity of the tuple. We concatenate the hidden vectors $\mathbf {h}_i^k$ with $\mathbf {h}_t^D$ and $\mathbf {h}_i^E$ and pass them to the second pointer network to obtain ${s}_i^2$ and ${e}_i^2$, which represent the normalized probabilities of the $i$th source word being the start and end of the second entity. These normalized probabilities are used to find the vector representation of the two entities, $\mathbf {a}_t^1$ and $\mathbf {a}_t^2$.
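A PyTorch sketch of one such pointer network is given below (not the authors' implementation; the names and the pre-concatenated feature interface are ours). The first pointer receives $[\mathbf {h}_t^D ; \mathbf {h}_i^E]$ per token, and the second pointer additionally receives the first pointer's Bi-LSTM outputs.

```python
import torch
import torch.nn as nn

class EntityPointer(nn.Module):
    """Bi-LSTM over per-token features, plus two feed-forward layers scoring start/end positions."""

    def __init__(self, d_in, d_p=300):
        super().__init__()
        self.bilstm = nn.LSTM(d_in, d_p, bidirectional=True, batch_first=True)
        self.start_ffn = nn.Linear(2 * d_p, 1)
        self.end_ffn = nn.Linear(2 * d_p, 1)

    def forward(self, features):
        # features: (batch, n, d_in), e.g. torch.cat([h_dec.unsqueeze(1).expand(-1, n, -1), h_enc], dim=-1)
        h, _ = self.bilstm(features)                                   # (batch, n, 2*d_p)
        start = torch.softmax(self.start_ffn(h).squeeze(-1), dim=1)    # softmax across source positions
        end = torch.softmax(self.end_ffn(h).squeeze(-1), dim=1)
        return start, end, h                                           # h can be reused by the second pointer
```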
We concatenate the entity vector representations $\mathbf {a}_t^1$ and $\mathbf {a}_t^2$ with $\mathbf {h}_t^D$ and pass it to a feed-forward network (FFN) with softmax to find the relation. This feed-forward layer has a weight matrix $\mathbf {W}_r \in \mathbb {R}^{\vert R \vert \times (8d_p + d_h)}$ and a bias vector $\mathbf {b}_r \in \mathbb {R}^{\vert R \vert }$.
$\mathbf {r}_t$ represents the normalized probabilities of the relation at time step $t$. The relation embedding vector $\mathbf {z}_t$ is obtained using $\mathrm {argmax}$ of $\mathbf {r}_t$ and $\mathbf {E}_r$. $\mathbf {y}_t \in \mathbb {R}^{(8d_p + d_r)}$ is the vector representation of the tuple predicted at time step $t$. During training, we pass the embedding vector of the gold label relation in place of the predicted relation. So the $\mathrm {argmax}$ function does not affect the back-propagation during training. The decoder stops the sequence generation process when the predicted relation is EOS. This is the classification network of the decoder.
During inference, we select the start and end location of the two entities such that the product of the four pointer probabilities is maximized keeping the constraints that the two entities do not overlap with each other and $1 \le b \le e \le n$ where $b$ and $e$ are the start and end location of the corresponding entities. We first choose the start and end location of entity 1 based on the maximum product of the corresponding start and end pointer probabilities. Then we find entity 2 in a similar way excluding the span of entity 1 to avoid overlap. The same procedure is repeated but this time we first find entity 2 followed by entity 1. We choose that pair of entities which gives the higher product of four pointer probabilities between these two choices. This model is referred to as PtrNetDecoding (PNDec) henceforth.
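A simplified sketch of the constrained span search for a single entity is shown below; it assumes the excluded span (the other entity) is a contiguous set of indices, and the full procedure applies it in both orders and keeps the pair of entities with the higher product of the four pointer probabilities. Function and variable names are illustrative.

```python
def best_span(start_probs, end_probs, banned=frozenset()):
    """Return (b, e) maximizing start_probs[b] * end_probs[e] with b <= e, avoiding `banned` indices."""
    best, best_p = None, -1.0
    n = len(start_probs)
    for b in range(n):
        if b in banned:
            continue
        for e in range(b, n):
            if e in banned:
                break  # extending further would overlap the other entity's span
            p = start_probs[b] * end_probs[e]
            if p > best_p:
                best, best_p = (b, e), p
    return best, best_p
```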
Encoder-Decoder Architecture ::: Attention Modeling
We experimented with three different attention mechanisms for our word-level decoding model to obtain the source context vector $\mathbf {e}_t$:
(1) Avg.: The context vector is obtained by averaging the hidden vectors of the encoder: $\mathbf {e}_t=\frac{1}{n}\sum _{i=1}^n \mathbf {h}_i^E$
(2) N-gram: The context vector is obtained by the N-gram attention mechanism of BIBREF9 (BIBREF9) with N=3.
$\textnormal {a}_i^g=(\mathbf {h}_n^{E})^T \mathbf {V}^g \mathbf {w}_i^g$, $\alpha ^g = \mathrm {softmax}(\mathbf {a}^g)$
$\mathbf {e}_t=[\mathbf {h}_n^E \Vert \sum _{g=1}^N \mathbf {W}^g (\sum _{i=1}^{\vert G^g \vert } \alpha _i^g \mathbf {w}_i^g)]$
Here, $\mathbf {h}_n^E$ is the last hidden state of the encoder, $g \in \lbrace 1, 2, 3\rbrace $ refers to the word gram combination, $G^g$ is the sequence of g-gram word representations for the input sentence, $\mathbf {w}_i^g$ is the $i$th g-gram vector (2-gram and 3-gram representations are obtained by average pooling), $\alpha _i^g$ is the normalized attention score for the $i$th g-gram vector, $\mathbf {W} \in \mathbb {R}^{d_h \times d_h}$ and $\mathbf {V} \in \mathbb {R}^{d_h \times d_h}$ are trainable parameters.
(3) Single: The context vector is obtained by the attention mechanism proposed by BIBREF10 (BIBREF10). This attention mechanism gives the best performance with the word-level decoding model.
$\mathbf {u}_t^i = \mathbf {W}_{u} \mathbf {h}_i^E, \quad \mathbf {q}_t^i = \mathbf {W}_{q} \mathbf {h}_{t-1}^D + \mathbf {b}_{q}$,
$\textnormal {a}_t^i = \mathbf {v}_a \tanh (\mathbf {q}_t^i + \mathbf {u}_t^i), \quad \alpha _t = \mathrm {softmax}(\mathbf {a}_t)$,
$\mathbf {e}_t = \sum _{i=1}^n \alpha _t^i \mathbf {h}_i^E$
where $\mathbf {W}_u \in \mathbb {R}^{d_h \times d_h}$, $\mathbf {W}_q \in \mathbb {R}^{d_h \times d_h}$, and $\mathbf {v}_a \in \mathbb {R}^{d_h}$ are all trainable attention parameters and $\mathbf {b}_q \in \mathbb {R}^{d_h}$ is a bias vector. $\alpha _t^i$ is the normalized attention score of the $i$th source word at the decoding time step $t$.
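A PyTorch sketch of this 'Single' additive attention variant follows; tensor shapes and names are ours, and in the pointer-network decoder the query can be $\mathbf {y}_{prev}$ instead of the previous decoder state, with the corresponding input size.

```python
import torch
import torch.nn as nn

class SingleAttention(nn.Module):
    """Additive attention: a_i = v^T tanh(W_q q + W_u h_i), e_t = sum_i alpha_i h_i."""

    def __init__(self, d_h):
        super().__init__()
        self.W_u = nn.Linear(d_h, d_h, bias=False)
        self.W_q = nn.Linear(d_h, d_h, bias=True)   # the bias plays the role of b_q
        self.v = nn.Linear(d_h, 1, bias=False)

    def forward(self, enc_states, query):
        # enc_states: (batch, n, d_h); query: (batch, d_h)
        scores = self.v(torch.tanh(self.W_q(query).unsqueeze(1) + self.W_u(enc_states)))
        alpha = torch.softmax(scores, dim=1)        # (batch, n, 1)
        return (alpha * enc_states).sum(dim=1)      # context vector e_t
```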
For our pointer network-based decoding model, we use three variants of the single attention model. First, we use $\mathbf {h}_{t-1}^D$ to calculate $\mathbf {q}_t^i$ in the attention mechanism. Next, we use $\mathbf {y}_{prev}$ to calculate $\mathbf {q}_t^i$, where $\mathbf {W}_q \in \mathbb {R}^{(8d_p + d_r) \times d_h}$. In the final variant, we obtain the attentive context vector by concatenating the two attentive vectors obtained using $\mathbf {h}_{t-1}^D$ and $\mathbf {y}_{prev}$. This gives the best performance with the pointer network-based decoding model. These variants are referred to as $\mathrm {dec_{hid}}$, $\mathrm {tup_{prev}}$, and $\mathrm {combo}$ in Table TABREF17.
Encoder-Decoder Architecture ::: Loss Function
We minimize the negative log-likelihood loss of the generated words for word-level decoding ($\mathcal {L}_{word}$) and minimize the sum of negative log-likelihood loss of relation classification and the four pointer locations for pointer network-based decoding ($\mathcal {L}_{ptr}$).
$v_t^b$ is the softmax score of the target word at time step $t$ for the word-level decoding model. $r$, $s$, and $e$ are the softmax score of the corresponding true relation label, true start and end pointer location of an entity. $b$, $t$, and $c$ refer to the $b$th training instance, $t$th time step of decoding, and the two entities of a tuple respectively. $B$ and $T$ are the batch size and maximum time step of the decoder respectively.
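Since the displayed loss equations are not reproduced in the text above, the following sketch shows one way these two objectives could be computed, assuming a simple average over the batch and the decoding steps; the exact normalization used by the authors may differ.

```python
def word_decoding_loss(token_log_probs):
    # token_log_probs: log-probabilities of the gold target tokens, shape (B, T)
    return -token_log_probs.mean()

def pointer_decoding_loss(rel_lp, s1_lp, e1_lp, s2_lp, e2_lp):
    # log-probabilities of the gold relation and the four gold pointer locations, each of shape (B, T)
    return -(rel_lp + s1_lp + e1_lp + s2_lp + e2_lp).mean()
```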
Experiments ::: Datasets
We focus on the task of extracting multiple tuples with overlapping entities from sentences. We choose the New York Times (NYT) corpus for our experiments. This corpus has multiple versions, and we choose the following two versions as their test datasets have a significantly larger number of instances of multiple relation tuples with overlapping entities. (i) The first version is used by BIBREF6 (BIBREF6) (mentioned as NYT in their paper) and has 24 relations. We name this version NYT24. (ii) The second version is used by BIBREF11 (BIBREF11) (mentioned as NYT10 in their paper) and has 29 relations. We name this version NYT29. We select 10% of the original training data and use it as the validation dataset. The remaining 90% is used for training. We include statistics of the training and test datasets in Table TABREF11.
Experiments ::: Parameter Settings
We run the Word2Vec BIBREF12 tool on the NYT corpus to initialize the word embeddings. The character embeddings and relation embeddings are initialized randomly. All embeddings are updated during training. We set the word embedding dimension $d_w=300$, relation embedding dimension $d_r=300$, character embedding dimension $d_c=50$, and character-based word feature dimension $d_f=50$. To extract the character-based word feature vector, we set the CNN filter width at 3 and the maximum length of a word at 10. The hidden dimension $d_h$ of the decoder LSTM cell is set at 300 and the hidden dimension of the forward and the backward LSTM of the encoder is set at 150. The hidden dimension of the forward and backward LSTM of the pointer networks is set at $d_p=300$. The model is trained with mini-batch size of 32 and the network parameters are optimized using Adam BIBREF13. Dropout layers with a dropout rate fixed at $0.3$ are used in our network to avoid overfitting.
Experiments ::: Baselines and Evaluation Metrics
We compare our model with the following state-of-the-art joint entity and relation extraction models:
(1) SPTree BIBREF4: This is an end-to-end neural entity and relation extraction model using sequence LSTM and Tree LSTM. Sequence LSTM is used to identify all the entities first and then Tree LSTM is used to find the relation between all pairs of entities.
(2) Tagging BIBREF5: This is a neural sequence tagging model which jointly extracts the entities and relations using an LSTM encoder and an LSTM decoder. They used a Cartesian product of entity tags and relation tags to encode the entity and relation information together. This model does not work when tuples have overlapping entities.
(3) CopyR BIBREF6: This model uses an encoder-decoder approach for joint extraction of entities and relations. It copies only the last token of an entity from the source sentence. Their best performing multi-decoder model is trained with a fixed number of decoders where each decoder extracts one tuple.
(4) HRL BIBREF11: This model uses a reinforcement learning (RL) algorithm with two levels of hierarchy for tuple extraction. A high-level RL finds the relation and a low-level RL identifies the two entities using a sequence tagging approach. This sequence tagging approach cannot always ensure extraction of exactly two entities.
(5) GraphR BIBREF14: This model considers each token in a sentence as a node in a graph, and edges connecting the nodes as relations between them. They use graph convolution network (GCN) to predict the relations of every edge and then filter out some of the relations.
(6) N-gram Attention BIBREF9: This model uses an encoder-decoder approach with N-gram attention mechanism for knowledge-base completion using distantly supervised data. The encoder uses the source tokens as its vocabulary and the decoder uses the entire Wikidata BIBREF15 entity IDs and relation IDs as its vocabulary. The encoder takes the source sentence as input and the decoder outputs the two entity IDs and relation ID for every tuple. During training, it uses the mapping of entity names and their Wikidata IDs of the entire Wikidata for proper alignment. Our task of extracting relation tuples with the raw entity names from a sentence is more challenging since entity names are not of fixed length. Our more generic approach is also helpful for extracting new entities which are not present in the existing knowledge bases such as Wikidata. We use their N-gram attention mechanism in our model to compare its performance with other attention models (Table TABREF17).
We use the same evaluation method used by BIBREF11 (BIBREF11) in their experiments. We consider the extracted tuples as a set and remove the duplicate tuples. An extracted tuple is considered as correct if the corresponding full entity names are correct and the relation is also correct. We report precision, recall, and F1 score for comparison.
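Concretely, the evaluation reduces to exact set matching of (entity1, entity2, relation) triples; a minimal scoring helper, with names of our own, could be:

```python
def prf1(predicted, gold):
    """Precision, recall and F1 over sets of relation tuples with exact entity and relation match."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```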
Experiments ::: Experimental Results
Among the baselines, HRL achieves significantly higher F1 scores on the two datasets. We run their model and our models five times and report the median results in Table TABREF15. Scores of other baselines in Table TABREF15 are taken from previous published papers BIBREF6, BIBREF11, BIBREF14. Our WordDecoding (WDec) model achieves F1 scores that are $3.9\%$ and $4.1\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. Similarly, our PtrNetDecoding (PNDec) model achieves F1 scores that are $3.0\%$ and $1.3\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. We perform a statistical significance test (t-test) under a bootstrap pairing between HRL and our models and see that the higher F1 scores achieved by our models are statistically significant ($p < 0.001$). Next, we combine the outputs of five runs of our models and five runs of HRL to build ensemble models. For a test instance, we include those tuples which are extracted in the majority ($\ge 3$) of the five runs. This ensemble mechanism increases the precision significantly on both datasets with a small improvement in recall as well. In the ensemble scenario, compared to HRL, WDec achieves $4.2\%$ and $3.5\%$ higher F1 scores and PNDec achieves $4.2\%$ and $2.9\%$ higher F1 scores on the NYT29 and NYT24 datasets respectively.
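The majority-vote ensembling described above can be sketched as follows (the function name is ours):

```python
from collections import Counter

def ensemble(runs, min_votes=3):
    """Keep tuples extracted by at least `min_votes` of the runs (here, 3 out of 5)."""
    votes = Counter(t for run in runs for t in set(run))
    return {t for t, count in votes.items() if count >= min_votes}
```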
Analysis and Discussion ::: Ablation Studies
We include the performance of different attention mechanisms with our WordDecoding model, effects of our masking-based copy mechanism, and ablation results of three variants of the single attention mechanism with our PtrNetDecoding model in Table TABREF17. WordDecoding with single attention achieves the highest F1 score on both datasets. We also see that our copy mechanism improves F1 scores by around 4–7% in each attention mechanism with both datasets. PtrNetDecoding achieves the highest F1 scores when we combine the two attention mechanisms with respect to the previous hidden vector of the decoder LSTM ($\mathbf {h}_{t-1}^D$) and representation of all previously extracted tuples ($\mathbf {y}_{prev}$).
Analysis and Discussion ::: Performance Analysis
From Table TABREF15, we see that CopyR, HRL, and our models achieve significantly higher F1 scores on the NYT24 dataset than on the NYT29 dataset. Both datasets have a similar set of relations and similar texts (NYT). So task-wise both datasets should pose a similar challenge. However, the F1 scores suggest that the NYT24 dataset is easier than NYT29. The reason is that NYT24 has around 72.0% of overlapping tuples between the training and test data (% of test tuples that appear in the training data with different source sentences). In contrast, NYT29 has only 41.7% of overlapping tuples. Due to the memorization power of deep neural networks, models can achieve a much higher F1 score on NYT24. The difference between the F1 scores of WordDecoding and PtrNetDecoding on NYT24 is marginally higher than on NYT29, since WordDecoding has more trainable parameters (about 27 million) than PtrNetDecoding (about 24.5 million) and NYT24 has very high tuple overlap. However, their ensemble versions achieve closer F1 scores on both datasets.
Despite achieving marginally lower F1 scores, the pointer network-based model can be considered more intuitive and suitable for this task. WordDecoding may not extract the special tokens and relation tokens at the right time steps, which is critical for finding the tuples from the generated sequence of words. PtrNetDecoding always extracts two entities of varying length and a relation for every tuple. We also observe that PtrNetDecoding is more than two times faster and takes one-third of the GPU memory of WordDecoding during training and inference. This speedup and smaller memory consumption are achieved due to the fewer number of decoding steps of PtrNetDecoding compared to WordDecoding. PtrNetDecoding extracts an entire tuple at each time step, whereas WordDecoding extracts just one word at each time step and so requires eight time steps on average to extract a tuple (assuming that the average length of an entity is two). The softmax operation at the projection layer of WordDecoding is applied across the entire vocabulary and the vocabulary size can be large (more than 40,000 for our datasets). In case of PtrNetDecoding, the softmax operation is applied across the sentence length (maximum of 100 in our experiments) and across the relation set (24 and 29 for our datasets). The costly softmax operation and the higher number of decoding time steps significantly increase the training and inference time for WordDecoding. The encoder-decoder model proposed by BIBREF9 (BIBREF9) faces a similar softmax-related problem as their target vocabulary contains the entire Wikidata entity IDs and relation IDs which is in the millions. HRL, which uses a deep reinforcement learning algorithm, takes around 8x more time to train than PtrNetDecoding with a similar GPU configuration. The speedup and smaller memory consumption will be useful when we move from sentence-level extraction to document-level extraction, since document length is much higher than sentence length and a document contains a higher number of tuples.
Analysis and Discussion ::: Error Analysis
The relation tuples extracted by a joint model can be erroneous for multiple reasons such as: (i) extracted entities are wrong; (ii) extracted relations are wrong; (iii) pairings of entities with relations are wrong. To see the effects of the first two reasons, we analyze the performance of HRL and our models on entity generation and relation generation separately. For entity generation, we only consider those entities which are part of some tuple. For relation generation, we only consider the relations of the tuples. We include the performance of our two models and HRL on entity generation and relation generation in Table TABREF20. Our proposed models perform better than HRL on both tasks. Comparing our two models, PtrNetDecoding performs better than WordDecoding on both tasks, although WordDecoding achieves higher F1 scores in tuple extraction. This suggests that PtrNetDecoding makes more errors while pairing the entities with relations. We further analyze the outputs of our models and HRL to determine the errors due to ordering of entities (Order), mismatch of the first entity (Ent1), and mismatch of the second entity (Ent2) in Table TABREF21. WordDecoding generates fewer errors than the other two models in all the categories and thus achieves the highest F1 scores on both datasets.
Related Work
Traditionally, researchers BIBREF0, BIBREF1, BIBREF2, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25 used a pipeline approach for relation tuple extraction where relations were identified using a classification network after all entities were detected. BIBREF26 (BIBREF26) used an encoder-decoder model to extract multiple relations present between two given entities.
Recently, some researchers BIBREF3, BIBREF4, BIBREF27, BIBREF28 tried to bring these two tasks closer together by sharing their parameters and optimizing them together. BIBREF5 (BIBREF5) used a sequence tagging scheme to jointly extract the entities and relations. BIBREF6 (BIBREF6) proposed an encoder-decoder model with copy mechanism to extract relation tuples with overlapping entities. BIBREF11 (BIBREF11) proposed a joint extraction model based on reinforcement learning (RL). BIBREF14 (BIBREF14) used a graph convolution network (GCN) where they treated each token in a sentence as a node in a graph and edges were considered as relations. BIBREF9 (BIBREF9) used an N-gram attention mechanism with an encoder-decoder model for completion of knowledge bases using distant supervised data.
Encoder-decoder models have been used for many NLP applications such as neural machine translation BIBREF29, BIBREF10, BIBREF30, sentence generation from structured data BIBREF31, BIBREF32, and open information extraction BIBREF33, BIBREF34. Pointer networks BIBREF35 have been used to extract a text span from text for tasks such as question answering BIBREF36, BIBREF37. For the first time, we use pointer networks with an encoder-decoder model to extract relation tuples from sentences.
Conclusion
Extracting relation tuples from sentences is a challenging task due to different length of entities, the presence of multiple tuples, and overlapping of entities among tuples. In this paper, we propose two novel approaches using encoder-decoder architecture to address this task. Experiments on the New York Times (NYT) corpus show that our proposed models achieve significantly improved new state-of-the-art F1 scores. As future work, we would like to explore our proposed models for a document-level tuple extraction task.
Acknowledgments
We would like to thank the anonymous reviewers for their valuable and constructive comments on this paper. | Yes |
710fa8b3e74ee63d2acc20af19f95f7702b7ce5e | 710fa8b3e74ee63d2acc20af19f95f7702b7ce5e_0 | Q: Which one of two proposed approaches performed better in experiments?
Text: Introduction
Distantly-supervised information extraction systems extract relation tuples with a set of pre-defined relations from text. Traditionally, researchers BIBREF0, BIBREF1, BIBREF2 use pipeline approaches where a named entity recognition (NER) system is used to identify the entities in a sentence and then a classifier is used to find the relation (or no relation) between them. However, due to the complete separation of entity detection and relation classification, these models miss the interaction between multiple relation tuples present in a sentence.
Recently, several neural network-based models BIBREF3, BIBREF4 were proposed to jointly extract entities and relations from a sentence. These models used a parameter-sharing mechanism to extract the entities and relations in the same network. But they still find the relations after identifying all the entities and do not fully capture the interaction among multiple tuples. BIBREF5 (BIBREF5) proposed a joint extraction model based on neural sequence tagging scheme. But their model could not extract tuples with overlapping entities in a sentence as it could not assign more than one tag to a word. BIBREF6 (BIBREF6) proposed a neural encoder-decoder model for extracting relation tuples with overlapping entities. However, they used a copy mechanism to copy only the last token of the entities, thus this model could not extract the full entity names. Also, their best performing model used a separate decoder to extract each tuple which limited the power of their model. This model was trained with a fixed number of decoders and could not extract tuples beyond that number during inference. Encoder-decoder models are powerful models and they are successful in many NLP tasks such as machine translation, sentence generation from structured data, and open information extraction.
In this paper, we explore how encoder-decoder models can be used effectively for extracting relation tuples from sentences. There are three major challenges in this task: (i) The model should be able to extract entities and relations together. (ii) It should be able to extract multiple tuples with overlapping entities. (iii) It should be able to extract exactly two entities of a tuple with their full names. To address these challenges, we propose two novel approaches using encoder-decoder architecture. We first propose a new representation scheme for relation tuples (Table TABREF1) such that it can represent multiple tuples with overlapping entities and different lengths of entities in a simple way. We employ an encoder-decoder model where the decoder extracts one word at a time like machine translation models. At the end of sequence generation, due to the unique representation of the tuples, we can extract the tuples from the sequence of words. Although this model performs quite well, generating one word at a time is somewhat unnatural for this task. Each tuple has exactly two entities and one relation, and each entity appears as a continuous text span in a sentence. The most effective way to identify them is to find their start and end location in the sentence. Each relation tuple can then be represented using five items: start and end location of the two entities and the relation between them (see Table TABREF1). Keeping this in mind, we propose a pointer network-based decoding framework. This decoder consists of two pointer networks which find the start and end location of the two entities in a sentence, and a classification network which identifies the relation between them. At every time step of the decoding, this decoder extracts an entire relation tuple, not just a word. Experiments on the New York Times (NYT) datasets show that our approaches work effectively for this task and achieve state-of-the-art performance. To summarize, the contributions of this paper are as follows:
(1) We propose a new representation scheme for relation tuples such that an encoder-decoder model, which extracts one word at each time step, can still find multiple tuples with overlapping entities and tuples with multi-token entities from sentences. We also propose a masking-based copy mechanism to extract the entities from the source sentence only.
(2) We propose a modification in the decoding framework with pointer networks to make the encoder-decoder model more suitable for this task. At every time step, this decoder extracts an entire relation tuple, not just a word. This new decoding framework helps in speeding up the training process and uses less resources (GPU memory). This will be an important factor when we move from sentence-level tuple extraction to document-level extraction.
(3) Experiments on the NYT datasets show that our approaches outperform all the previous state-of-the-art models significantly and set a new benchmark on these datasets.
Task Description
A relation tuple consists of two entities and a relation. Such tuples can be found in sentences where an entity is a text span in a sentence and a relation comes from a pre-defined set $R$. These tuples may share one or both entities among them. Based on this, we divide the sentences into three classes: (i) No Entity Overlap (NEO): A sentence in this class has one or more tuples, but they do not share any entities. (ii) Entity Pair Overlap (EPO): A sentence in this class has more than one tuple, and at least two tuples share both the entities in the same or reverse order. (iii) Single Entity Overlap (SEO): A sentence in this class has more than one tuple and at least two tuples share exactly one entity. It should be noted that a sentence can belong to both EPO and SEO classes. Our task is to extract all relation tuples present in a sentence.
Encoder-Decoder Architecture
In this task, input to the system is a sequence of words, and output is a set of relation tuples. In our first approach, we represent each tuple as entity1 ; entity2 ; relation. We use `;' as a separator token to separate the tuple components. Multiple tuples are separated using the `$\vert $' token. We have included one example of such representation in Table TABREF1. Multiple relation tuples with overlapping entities and different lengths of entities can be represented in a simple way using these special tokens (; and $\vert $). During inference, after the end of sequence generation, relation tuples can be extracted easily using these special tokens. Due to this uniform representation scheme, where entity tokens, relation tokens, and special tokens are treated similarly, we use a shared vocabulary between the encoder and decoder which includes all of these tokens. The input sentence contains clue words for every relation which can help generate the relation tokens. We use two special tokens so that the model can distinguish between the beginning of a relation tuple and the beginning of a tuple component. To extract the relation tuples from a sentence using the encoder-decoder model, the model has to generate the entity tokens, find relation clue words and map them to the relation tokens, and generate the special tokens at appropriate time. Our experiments show that the encoder-decoder models can achieve this quite effectively.
Encoder-Decoder Architecture ::: Embedding Layer & Encoder
We create a single vocabulary $V$ consisting of the source sentence tokens, relation names from relation set $R$, special separator tokens (`;', `$\vert $'), start-of-target-sequence token (SOS), end-of-target-sequence token (EOS), and unknown word token (UNK). Word-level embeddings are formed by two components: (1) pre-trained word vectors (2) character embedding-based feature vectors. We use a word embedding layer $\mathbf {E}_w \in \mathbb {R}^{\vert V \vert \times d_w}$ and a character embedding layer $\mathbf {E}_c \in \mathbb {R}^{\vert A \vert \times d_c}$, where $d_w$ is the dimension of word vectors, $A$ is the character alphabet of input sentence tokens, and $d_c$ is the dimension of character embedding vectors. Following BIBREF7 (BIBREF7), we use a convolutional neural network with max-pooling to extract a feature vector of size $d_f$ for every word. Word embeddings and character embedding-based feature vectors are concatenated ($\Vert $) to obtain the representation of the input tokens.
A source sentence $\mathbf {S}$ is represented by vectors of its tokens $\mathbf {x}_1, \mathbf {x}_2,....,\mathbf {x}_n$, where $\mathbf {x}_i \in \mathbb {R}^{(d_w+d_f)}$ is the vector representation of the $i$th word and $n$ is the length of $\mathbf {S}$. These vectors $\mathbf {x}_i$ are passed to a bi-directional LSTM BIBREF8 (Bi-LSTM) to obtain the hidden representation $\mathbf {h}_i^E$. We set the hidden dimension of the forward and backward LSTM of the Bi-LSTM to be $d_h/2$ to obtain $\mathbf {h}_i^E \in \mathbb {R}^{d_h}$, where $d_h$ is the hidden dimension of the sequence generator LSTM of the decoder described below.
Encoder-Decoder Architecture ::: Word-level Decoder & Copy Mechanism
A target sequence $\mathbf {T}$ is represented by only word embedding vectors of its tokens $\mathbf {y}_0, \mathbf {y}_1,....,\mathbf {y}_m$ where $\mathbf {y}_i \in \mathbb {R}^{d_w}$ is the embedding vector of the $i$th token and $m$ is the length of the target sequence. $\mathbf {y}_0$ and $\mathbf {y}_m$ represent the embedding vector of the SOS and EOS token respectively. The decoder generates one token at a time and stops when EOS is generated. We use an LSTM as the decoder and at time step $t$, the decoder takes the source sentence encoding ($\mathbf {e}_t \in \mathbb {R}^{d_h}$) and the previous target word embedding ($\mathbf {y}_{t-1}$) as the input and generates the hidden representation of the current token ($\mathbf {h}_t^D \in \mathbb {R}^{d_h}$). The sentence encoding vector $\mathbf {e}_t$ can be obtained using attention mechanism. $\mathbf {h}_t^D$ is projected to the vocabulary $V$ using a linear layer with weight matrix $\mathbf {W}_v \in \mathbb {R}^{\vert V \vert \times d_h}$ and bias vector $\mathbf {b}_v \in \mathbb {R}^{\vert V \vert }$ (projection layer).
$\mathbf {o}_t$ represents the normalized scores of all the words in the embedding vocabulary at time step $t$. $\mathbf {h}_{t-1}^D$ is the previous hidden state of the LSTM.
The projection layer of the decoder maps the decoder output to the entire vocabulary. During training, we use the gold label target tokens directly. However, during inference, the decoder may predict a token from the vocabulary which is not present in the current sentence or the set of relations or the special tokens. To prevent this, we use a masking technique while applying the softmax operation at the projection layer. We mask (exclude) all words of the vocabulary except the current source sentence tokens, relation tokens, separator tokens (`;', `$\vert $'), UNK, and EOS tokens in the softmax operation. To mask (exclude) some word from softmax, we set the corresponding value in $\hat{\mathbf {o}}_t$ at $-\infty $ and the corresponding softmax score will be zero. This ensures the copying of entities from the source sentence only. We include the UNK token in the softmax operation to make sure that the model generates new entities during inference. If the decoder predicts an UNK token, we replace it with the corresponding source word which has the highest attention score. During inference, after decoding is finished, we extract all tuples based on the special tokens, remove duplicate tuples and tuples in which both entities are the same or tuples where the relation token is not from the relation set. This model is referred to as WordDecoding (WDec) henceforth.
Encoder-Decoder Architecture ::: Pointer Network-Based Decoder
In the second approach, we identify the entities in the sentence using their start and end locations. We remove the special tokens and relation names from the word vocabulary and word embeddings are used only at the encoder side along with character embeddings. We use an additional relation embedding matrix $\mathbf {E}_r \in \mathbb {R}^{\vert R \vert \times d_r}$ at the decoder side of our model, where $R$ is the set of relations and $d_r$ is the dimension of relation vectors. The relation set $R$ includes a special relation token EOS which indicates the end of the sequence. Relation tuples are represented as a sequence $T=y_0, y_1,....,y_m$, where $y_t$ is a tuple consisting of four indexes in the source sentence indicating the start and end location of the two entities and a relation between them (see Table TABREF1). $y_0$ is a dummy tuple that represents the start tuple of the sequence and $y_m$ functions as the end tuple of the sequence which has EOS as the relation (entities are ignored for this tuple). The decoder consists of an LSTM with hidden dimension $d_h$ to generate the sequence of tuples, two pointer networks to find the two entities, and a classification network to find the relation of a tuple. At time step $t$, the decoder takes the source sentence encoding ($\mathbf {e}_t \in \mathbb {R}^{d_h}$) and the representation of all previously generated tuples ($\mathbf {y}_{prev}=\sum _{j=0}^{t-1}\mathbf {y}_{j}$) as the input and generates the hidden representation of the current tuple, $\mathbf {h}_t^D \in \mathbb {R}^{d_h}$. The sentence encoding vector $\mathbf {e}_t$ is obtained using an attention mechanism as explained later. Relation tuples are a set and to prevent the decoder from generating the same tuple again, we pass the information about all previously generated tuples at each time step of decoding. $\mathbf {y}_j$ is the vector representation of the tuple predicted at time step $j < t$ and we use the zero vector ($\mathbf {y}_0=\overrightarrow{0}$) to represent the dummy tuple $y_0$. $\mathbf {h}_{t-1}^D$ is the hidden state of the LSTM at time step $t-1$.
Encoder-Decoder Architecture ::: Relation Tuple Extraction
After obtaining the hidden representation of the current tuple $\mathbf {h}_t^D$, we first find the start and end pointers of the two entities in the source sentence. We concatenate the vector $\mathbf {h}_t^D$ with the hidden vectors $\mathbf {h}_i^E$ of the encoder and pass them to a Bi-LSTM layer with hidden dimension $d_p$ for forward and backward LSTM. The hidden vectors of this Bi-LSTM layer $\mathbf {h}_i^k \in \mathbb {R}^{2d_p}$ are passed to two feed-forward networks (FFN) with softmax to convert each hidden vector into two scalar values between 0 and 1. Softmax operation is applied across all the words in the input sentence. These two scalar values represent the probability of the corresponding source sentence token to be the start and end location of the first entity. This Bi-LSTM layer with the two feed-forward layers is the first pointer network which identifies the first entity of the current relation tuple.
where $\mathbf {W}_s^1 \in \mathbb {R}^{1 \times 2d_p}$, $\mathbf {W}_e^1 \in \mathbb {R}^{1 \times 2d_p}$, ${b}_s^1$, and ${b}_e^1$ are the weights and bias parameters of the feed-forward layers. ${s}_i^1$, ${e}_i^1$ represent the normalized probabilities of the $i$th source word being the start and end token of the first entity of the predicted tuple. We use another pointer network to extract the second entity of the tuple. We concatenate the hidden vectors $\mathbf {h}_i^k$ with $\mathbf {h}_t^D$ and $\mathbf {h}_i^E$ and pass them to the second pointer network to obtain ${s}_i^2$ and ${e}_i^2$, which represent the normalized probabilities of the $i$th source word being the start and end of the second entity. These normalized probabilities are used to find the vector representation of the two entities, $\mathbf {a}_t^1$ and $\mathbf {a}_t^2$.
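A sketch of the first pointer network appears below; it is illustrative only. The Bi-LSTM runs over the concatenation of the encoder states with the repeated tuple state, and the two feed-forward layers produce the start and end distributions. Computing the entity vector as the probability-weighted pooling of the pointer-network states is our reading of the text, not a quoted formula; the second pointer network is analogous, with $\mathbf {h}_t^D$ and $\mathbf {h}_i^E$ added to its input.

```python
import torch
import torch.nn as nn

d_h, d_p = 300, 300
ptr_bilstm = nn.LSTM(input_size=2 * d_h, hidden_size=d_p,
                     bidirectional=True, batch_first=True)
start_ffn, end_ffn = nn.Linear(2 * d_p, 1), nn.Linear(2 * d_p, 1)

def first_pointer_network(h_enc, h_dec):
    """h_enc: [batch, n, d_h] encoder states; h_dec: [batch, d_h] current tuple state h_t^D."""
    n = h_enc.size(1)
    h_dec_rep = h_dec.unsqueeze(1).expand(-1, n, -1)
    h_k, _ = ptr_bilstm(torch.cat([h_enc, h_dec_rep], dim=-1))    # [batch, n, 2*d_p]
    s = torch.softmax(start_ffn(h_k).squeeze(-1), dim=-1)         # start probabilities over words
    e = torch.softmax(end_ffn(h_k).squeeze(-1), dim=-1)           # end probabilities over words
    # assumed entity vector: probability-weighted pooling of the pointer-network states
    a = torch.cat([s.unsqueeze(1) @ h_k, e.unsqueeze(1) @ h_k], dim=-1).squeeze(1)
    return s, e, a                                                # a: [batch, 4*d_p]
```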
We concatenate the entity vector representations $\mathbf {a}_t^1$ and $\mathbf {a}_t^2$ with $\mathbf {h}_t^D$ and pass it to a feed-forward network (FFN) with softmax to find the relation. This feed-forward layer has a weight matrix $\mathbf {W}_r \in \mathbb {R}^{\vert R \vert \times (8d_p + d_h)}$ and a bias vector $\mathbf {b}_r \in \mathbb {R}^{\vert R \vert }$.
$\mathbf {r}_t$ represents the normalized probabilities of the relation at time step $t$. The relation embedding vector $\mathbf {z}_t$ is obtained using $\mathrm {argmax}$ of $\mathbf {r}_t$ and $\mathbf {E}_r$. $\mathbf {y}_t \in \mathbb {R}^{(8d_p + d_r)}$ is the vector representation of the tuple predicted at time step $t$. During training, we pass the embedding vector of the gold label relation in place of the predicted relation. So the $\mathrm {argmax}$ function does not affect the back-propagation during training. The decoder stops the sequence generation process when the predicted relation is EOS. This is the classification network of the decoder.
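The classification network and the tuple representation can be sketched as follows; the dimensions follow the text, but the code itself is an assumption rather than the released model.

```python
import torch
import torch.nn as nn

d_h, d_p, d_r, num_rel = 300, 300, 300, 25       # num_rel includes the special EOS relation
rel_ffn = nn.Linear(8 * d_p + d_h, num_rel)      # W_r, b_r
E_r = nn.Embedding(num_rel, d_r)                 # relation embedding matrix

def classify_relation(a1, a2, h_dec):
    """a1, a2: [batch, 4*d_p] entity vectors; h_dec: [batch, d_h] tuple state h_t^D."""
    r = torch.softmax(rel_ffn(torch.cat([a1, a2, h_dec], dim=-1)), dim=-1)
    z = E_r(r.argmax(dim=-1))                    # the gold relation embedding replaces this during training
    y = torch.cat([a1, a2, z], dim=-1)           # tuple vector y_t of size 8*d_p + d_r
    return r, y
```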
During inference, we select the start and end location of the two entities such that the product of the four pointer probabilities is maximized keeping the constraints that the two entities do not overlap with each other and $1 \le b \le e \le n$ where $b$ and $e$ are the start and end location of the corresponding entities. We first choose the start and end location of entity 1 based on the maximum product of the corresponding start and end pointer probabilities. Then we find entity 2 in a similar way excluding the span of entity 1 to avoid overlap. The same procedure is repeated but this time we first find entity 2 followed by entity 1. We choose that pair of entities which gives the higher product of four pointer probabilities between these two choices. This model is referred to as PtrNetDecoding (PNDec) henceforth.
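The constrained span search can be written as a brute-force sketch over all candidate spans (quadratic in sentence length, which is acceptable for sentence-level inputs); the function names and exact tie-breaking are ours.

```python
def best_span(start_probs, end_probs, forbidden=None):
    """Maximize start_probs[b] * end_probs[e] subject to b <= e,
    optionally skipping spans that overlap the `forbidden` (b, e) span."""
    best = (0, 0, -1.0)
    for b in range(len(start_probs)):
        for e in range(b, len(end_probs)):
            if forbidden and not (e < forbidden[0] or b > forbidden[1]):
                continue                          # overlaps the other entity
            score = start_probs[b] * end_probs[e]
            if score > best[2]:
                best = (b, e, score)
    return best

def decode_entities(s1, e1, s2, e2):
    """Try both orders and keep the pair with the higher product of the four probabilities."""
    a1 = best_span(s1, e1); a2 = best_span(s2, e2, forbidden=a1[:2])   # entity 1 first
    b2 = best_span(s2, e2); b1 = best_span(s1, e1, forbidden=b2[:2])   # entity 2 first
    return (a1[:2], a2[:2]) if a1[2] * a2[2] >= b1[2] * b2[2] else (b1[:2], b2[:2])
```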
Encoder-Decoder Architecture ::: Attention Modeling
We experimented with three different attention mechanisms for our word-level decoding model to obtain the source context vector $\mathbf {e}_t$:
(1) Avg.: The context vector is obtained by averaging the hidden vectors of the encoder: $\mathbf {e}_t=\frac{1}{n}\sum _{i=1}^n \mathbf {h}_i^E$
(2) N-gram: The context vector is obtained by the N-gram attention mechanism of BIBREF9 (BIBREF9) with N=3.
$\textnormal {a}_i^g=(\mathbf {h}_n^{E})^T \mathbf {V}^g \mathbf {w}_i^g$, $\alpha ^g = \mathrm {softmax}(\mathbf {a}^g)$
$\mathbf {e}_t=[\mathbf {h}_n^E \Vert \sum _{g=1}^N \mathbf {W}^g (\sum _{i=1}^{\vert G^g \vert } \alpha _i^g \mathbf {w}_i^g)]$
Here, $\mathbf {h}_n^E$ is the last hidden state of the encoder, $g \in \lbrace 1, 2, 3\rbrace $ refers to the word gram combination, $G^g$ is the sequence of g-gram word representations for the input sentence, $\mathbf {w}_i^g$ is the $i$th g-gram vector (2-gram and 3-gram representations are obtained by average pooling), $\alpha _i^g$ is the normalized attention score for the $i$th g-gram vector, $\mathbf {W} \in \mathbb {R}^{d_h \times d_h}$ and $\mathbf {V} \in \mathbb {R}^{d_h \times d_h}$ are trainable parameters.
(3) Single: The context vector is obtained by the attention mechanism proposed by BIBREF10 (BIBREF10). This attention mechanism gives the best performance with the word-level decoding model.
$\mathbf {u}_t^i = \mathbf {W}_{u} \mathbf {h}_i^E, \quad \mathbf {q}_t^i = \mathbf {W}_{q} \mathbf {h}_{t-1}^D + \mathbf {b}_{q}$,
$\textnormal {a}_t^i = \mathbf {v}_a \tanh (\mathbf {q}_t^i + \mathbf {u}_t^i), \quad \alpha _t = \mathrm {softmax}(\mathbf {a}_t)$,
$\mathbf {e}_t = \sum _{i=1}^n \alpha _t^i \mathbf {h}_i^E$
where $\mathbf {W}_u \in \mathbb {R}^{d_h \times d_h}$, $\mathbf {W}_q \in \mathbb {R}^{d_h \times d_h}$, and $\mathbf {v}_a \in \mathbb {R}^{d_h}$ are all trainable attention parameters and $\mathbf {b}_q \in \mathbb {R}^{d_h}$ is a bias vector. $\alpha _t^i$ is the normalized attention score of the $i$th source word at the decoding time step $t$.
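A compact sketch of this single (Bahdanau-style) attention is given below; the PyTorch modules and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

d_h = 300
W_u = nn.Linear(d_h, d_h, bias=False)
W_q = nn.Linear(d_h, d_h)                        # its bias plays the role of b_q
v_a = nn.Linear(d_h, 1, bias=False)

def single_attention(h_enc, h_dec_prev):
    """h_enc: [batch, n, d_h] encoder states; h_dec_prev: [batch, d_h] previous decoder state."""
    u = W_u(h_enc)                                 # [batch, n, d_h]
    q = W_q(h_dec_prev).unsqueeze(1)               # [batch, 1, d_h], broadcast over source words
    a = v_a(torch.tanh(q + u)).squeeze(-1)         # [batch, n] unnormalized scores
    alpha = torch.softmax(a, dim=-1)
    e_t = (alpha.unsqueeze(1) @ h_enc).squeeze(1)  # [batch, d_h] context vector
    return e_t, alpha
```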
For our pointer network-based decoding model, we use three variants of the single attention model. First, we use $\mathbf {h}_{t-1}^D$ to calculate $\mathbf {q}_t^i$ in the attention mechanism. Next, we use $\mathbf {y}_{prev}$ to calculate $\mathbf {q}_t^i$, where $\mathbf {W}_q \in \mathbb {R}^{(8d_p + d_r) \times d_h}$. In the final variant, we obtain the attentive context vector by concatenating the two attentive vectors obtained using $\mathbf {h}_{t-1}^D$ and $\mathbf {y}_{prev}$. This gives the best performance with the pointer network-based decoding model. These variants are referred to as $\mathrm {dec_{hid}}$, $\mathrm {tup_{prev}}$, and $\mathrm {combo}$ in Table TABREF17.
Encoder-Decoder Architecture ::: Loss Function
We minimize the negative log-likelihood loss of the generated words for word-level decoding ($\mathcal {L}_{word}$) and minimize the sum of negative log-likelihood loss of relation classification and the four pointer locations for pointer network-based decoding ($\mathcal {L}_{ptr}$).
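The loss equations themselves are not reproduced in this excerpt; a plausible reconstruction consistent with the symbol description below (our reading, up to the choice of averaging constant) is:

```latex
\mathcal{L}_{word} = -\frac{1}{B \times T}\sum_{b=1}^{B}\sum_{t=1}^{T} \log v_t^b, \qquad
\mathcal{L}_{ptr} = -\frac{1}{B \times T}\sum_{b=1}^{B}\sum_{t=1}^{T}
\Big[ \log r_t^b + \sum_{c=1}^{2} \big( \log s_{t,c}^{b} + \log e_{t,c}^{b} \big) \Big]
```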
$v_t^b$ is the softmax score of the target word at time step $t$ for the word-level decoding model. $r$, $s$, and $e$ are the softmax score of the corresponding true relation label, true start and end pointer location of an entity. $b$, $t$, and $c$ refer to the $b$th training instance, $t$th time step of decoding, and the two entities of a tuple respectively. $B$ and $T$ are the batch size and maximum time step of the decoder respectively.
Experiments ::: Datasets
We focus on the task of extracting multiple tuples with overlapping entities from sentences. We choose the New York Times (NYT) corpus for our experiments. This corpus has multiple versions, and we choose the following two versions as their test datasets have a significantly larger number of instances of multiple relation tuples with overlapping entities. (i) The first version is used by BIBREF6 (BIBREF6) (mentioned as NYT in their paper) and has 24 relations. We name this version as NYT24. (ii) The second version is used by BIBREF11 (BIBREF11) (mentioned as NYT10 in their paper) and has 29 relations. We name this version as NYT29. We select 10% of the original training data and use it as the validation dataset. The remaining 90% is used for training. We include statistics of the training and test datasets in Table TABREF11.
Experiments ::: Parameter Settings
We run the Word2Vec BIBREF12 tool on the NYT corpus to initialize the word embeddings. The character embeddings and relation embeddings are initialized randomly. All embeddings are updated during training. We set the word embedding dimension $d_w=300$, relation embedding dimension $d_r=300$, character embedding dimension $d_c=50$, and character-based word feature dimension $d_f=50$. To extract the character-based word feature vector, we set the CNN filter width at 3 and the maximum length of a word at 10. The hidden dimension $d_h$ of the decoder LSTM cell is set at 300 and the hidden dimension of the forward and the backward LSTM of the encoder is set at 150. The hidden dimension of the forward and backward LSTM of the pointer networks is set at $d_p=300$. The model is trained with mini-batch size of 32 and the network parameters are optimized using Adam BIBREF13. Dropout layers with a dropout rate fixed at $0.3$ are used in our network to avoid overfitting.
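For reference, the stated hyperparameters can be grouped into a single configuration object; the field names below are ours, the values are those reported above.

```python
from dataclasses import dataclass

@dataclass
class Config:
    d_w: int = 300        # word embedding dimension
    d_r: int = 300        # relation embedding dimension
    d_c: int = 50         # character embedding dimension
    d_f: int = 50         # character-based word feature dimension
    cnn_filter_width: int = 3
    max_word_len: int = 10
    d_h: int = 300        # decoder LSTM hidden size (encoder Bi-LSTM uses d_h/2 per direction)
    d_p: int = 300        # pointer-network Bi-LSTM hidden size per direction
    batch_size: int = 32
    dropout: float = 0.3
```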
Experiments ::: Baselines and Evaluation Metrics
We compare our model with the following state-of-the-art joint entity and relation extraction models:
(1) SPTree BIBREF4: This is an end-to-end neural entity and relation extraction model using sequence LSTM and Tree LSTM. Sequence LSTM is used to identify all the entities first and then Tree LSTM is used to find the relation between all pairs of entities.
(2) Tagging BIBREF5: This is a neural sequence tagging model which jointly extracts the entities and relations using an LSTM encoder and an LSTM decoder. They used a Cartesian product of entity tags and relation tags to encode the entity and relation information together. This model does not work when tuples have overlapping entities.
(3) CopyR BIBREF6: This model uses an encoder-decoder approach for joint extraction of entities and relations. It copies only the last token of an entity from the source sentence. Their best performing multi-decoder model is trained with a fixed number of decoders where each decoder extracts one tuple.
(4) HRL BIBREF11: This model uses a reinforcement learning (RL) algorithm with two levels of hierarchy for tuple extraction. A high-level RL finds the relation and a low-level RL identifies the two entities using a sequence tagging approach. This sequence tagging approach cannot always ensure extraction of exactly two entities.
(5) GraphR BIBREF14: This model considers each token in a sentence as a node in a graph, and edges connecting the nodes as relations between them. They use graph convolution network (GCN) to predict the relations of every edge and then filter out some of the relations.
(6) N-gram Attention BIBREF9: This model uses an encoder-decoder approach with N-gram attention mechanism for knowledge-base completion using distantly supervised data. The encoder uses the source tokens as its vocabulary and the decoder uses the entire Wikidata BIBREF15 entity IDs and relation IDs as its vocabulary. The encoder takes the source sentence as input and the decoder outputs the two entity IDs and relation ID for every tuple. During training, it uses the mapping of entity names and their Wikidata IDs of the entire Wikidata for proper alignment. Our task of extracting relation tuples with the raw entity names from a sentence is more challenging since entity names are not of fixed length. Our more generic approach is also helpful for extracting new entities which are not present in the existing knowledge bases such as Wikidata. We use their N-gram attention mechanism in our model to compare its performance with other attention models (Table TABREF17).
We use the same evaluation method used by BIBREF11 (BIBREF11) in their experiments. We consider the extracted tuples as a set and remove the duplicate tuples. An extracted tuple is considered as correct if the corresponding full entity names are correct and the relation is also correct. We report precision, recall, and F1 score for comparison.
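A minimal sketch of this set-based exact-match evaluation is shown below (illustrative only; it assumes tuples are compared by full entity strings and relation name).

```python
def evaluate(predicted, gold):
    """predicted, gold: per-sentence lists of (entity1, entity2, relation) tuples."""
    tp = pred_total = gold_total = 0
    for pred_tuples, gold_tuples in zip(predicted, gold):
        pred_set, gold_set = set(pred_tuples), set(gold_tuples)   # duplicates removed
        tp += len(pred_set & gold_set)                            # exact-match correct tuples
        pred_total += len(pred_set)
        gold_total += len(gold_set)
    precision = tp / pred_total if pred_total else 0.0
    recall = tp / gold_total if gold_total else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```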
Experiments ::: Experimental Results
Among the baselines, HRL achieves significantly higher F1 scores on the two datasets. We run their model and our models five times and report the median results in Table TABREF15. Scores of other baselines in Table TABREF15 are taken from previous published papers BIBREF6, BIBREF11, BIBREF14. Our WordDecoding (WDec) model achieves F1 scores that are $3.9\%$ and $4.1\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. Similarly, our PtrNetDecoding (PNDec) model achieves F1 scores that are $3.0\%$ and $1.3\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. We perform a statistical significance test (t-test) under a bootstrap pairing between HRL and our models and see that the higher F1 scores achieved by our models are statistically significant ($p < 0.001$). Next, we combine the outputs of five runs of our models and five runs of HRL to build ensemble models. For a test instance, we include those tuples which are extracted in the majority ($\ge 3$) of the five runs. This ensemble mechanism increases the precision significantly on both datasets with a small improvement in recall as well. In the ensemble scenario, compared to HRL, WDec achieves $4.2\%$ and $3.5\%$ higher F1 scores and PNDec achieves $4.2\%$ and $2.9\%$ higher F1 scores on the NYT29 and NYT24 datasets respectively.
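The ensemble step reduces to a majority vote over the tuple sets of the five runs, as in the sketch below (illustrative; the toy tuples are made up).

```python
from collections import Counter

def ensemble(runs, min_votes=3):
    """runs: list of tuple sets extracted by different runs for one test sentence."""
    votes = Counter(t for run in runs for t in set(run))
    return {t for t, c in votes.items() if c >= min_votes}

runs = [{("a", "b", "r1")}, {("a", "b", "r1"), ("c", "d", "r2")},
        {("a", "b", "r1")}, {("c", "d", "r2")}, {("a", "b", "r1")}]
print(ensemble(runs))   # {('a', 'b', 'r1')} -- kept because it appears in 4 of 5 runs
```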
Analysis and Discussion ::: Ablation Studies
We include the performance of different attention mechanisms with our WordDecoding model, effects of our masking-based copy mechanism, and ablation results of three variants of the single attention mechanism with our PtrNetDecoding model in Table TABREF17. WordDecoding with single attention achieves the highest F1 score on both datasets. We also see that our copy mechanism improves F1 scores by around 4–7% in each attention mechanism with both datasets. PtrNetDecoding achieves the highest F1 scores when we combine the two attention mechanisms with respect to the previous hidden vector of the decoder LSTM ($\mathbf {h}_{t-1}^D$) and representation of all previously extracted tuples ($\mathbf {y}_{prev}$).
Analysis and Discussion ::: Performance Analysis
From Table TABREF15, we see that CopyR, HRL, and our models achieve significantly higher F1 scores on the NYT24 dataset than on the NYT29 dataset. Both datasets have a similar set of relations and similar texts (NYT). So task-wise both datasets should pose a similar challenge. However, the F1 scores suggest that the NYT24 dataset is easier than NYT29. The reason is that NYT24 has around 72.0% of overlapping tuples between the training and test data (% of test tuples that appear in the training data with different source sentences). In contrast, NYT29 has only 41.7% of overlapping tuples. Due to the memorization power of deep neural networks, models can achieve a much higher F1 score on NYT24. The difference between the F1 scores of WordDecoding and PtrNetDecoding on NYT24 is marginally higher than on NYT29, since WordDecoding has more trainable parameters (about 27 million) than PtrNetDecoding (about 24.5 million) and NYT24 has very high tuple overlap. However, their ensemble versions achieve closer F1 scores on both datasets.
Despite achieving marginally lower F1 scores, the pointer network-based model can be considered more intuitive and suitable for this task. WordDecoding may not extract the special tokens and relation tokens at the right time steps, which is critical for finding the tuples from the generated sequence of words. PtrNetDecoding always extracts two entities of varying length and a relation for every tuple. We also observe that PtrNetDecoding is more than two times faster and takes one-third of the GPU memory of WordDecoding during training and inference. This speedup and smaller memory consumption are achieved due to the fewer number of decoding steps of PtrNetDecoding compared to WordDecoding. PtrNetDecoding extracts an entire tuple at each time step, whereas WordDecoding extracts just one word at each time step and so requires eight time steps on average to extract a tuple (assuming that the average length of an entity is two). The softmax operation at the projection layer of WordDecoding is applied across the entire vocabulary and the vocabulary size can be large (more than 40,000 for our datasets). In case of PtrNetDecoding, the softmax operation is applied across the sentence length (maximum of 100 in our experiments) and across the relation set (24 and 29 for our datasets). The costly softmax operation and the higher number of decoding time steps significantly increase the training and inference time for WordDecoding. The encoder-decoder model proposed by BIBREF9 (BIBREF9) faces a similar softmax-related problem as their target vocabulary contains the entire Wikidata entity IDs and relation IDs which is in the millions. HRL, which uses a deep reinforcement learning algorithm, takes around 8x more time to train than PtrNetDecoding with a similar GPU configuration. The speedup and smaller memory consumption will be useful when we move from sentence-level extraction to document-level extraction, since document length is much higher than sentence length and a document contains a higher number of tuples.
Analysis and Discussion ::: Error Analysis
The relation tuples extracted by a joint model can be erroneous for multiple reasons such as: (i) extracted entities are wrong; (ii) extracted relations are wrong; (iii) pairings of entities with relations are wrong. To see the effects of the first two reasons, we analyze the performance of HRL and our models on entity generation and relation generation separately. For entity generation, we only consider those entities which are part of some tuple. For relation generation, we only consider the relations of the tuples. We include the performance of our two models and HRL on entity generation and relation generation in Table TABREF20. Our proposed models perform better than HRL on both tasks. Comparing our two models, PtrNetDecoding performs better than WordDecoding on both tasks, although WordDecoding achieves higher F1 scores in tuple extraction. This suggests that PtrNetDecoding makes more errors while pairing the entities with relations. We further analyze the outputs of our models and HRL to determine the errors due to ordering of entities (Order), mismatch of the first entity (Ent1), and mismatch of the second entity (Ent2) in Table TABREF21. WordDecoding generates fewer errors than the other two models in all the categories and thus achieves the highest F1 scores on both datasets.
Related Work
Traditionally, researchers BIBREF0, BIBREF1, BIBREF2, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25 used a pipeline approach for relation tuple extraction where relations were identified using a classification network after all entities were detected. BIBREF26 (BIBREF26) used an encoder-decoder model to extract multiple relations present between two given entities.
Recently, some researchers BIBREF3, BIBREF4, BIBREF27, BIBREF28 tried to bring these two tasks closer together by sharing their parameters and optimizing them together. BIBREF5 (BIBREF5) used a sequence tagging scheme to jointly extract the entities and relations. BIBREF6 (BIBREF6) proposed an encoder-decoder model with copy mechanism to extract relation tuples with overlapping entities. BIBREF11 (BIBREF11) proposed a joint extraction model based on reinforcement learning (RL). BIBREF14 (BIBREF14) used a graph convolution network (GCN) where they treated each token in a sentence as a node in a graph and edges were considered as relations. BIBREF9 (BIBREF9) used an N-gram attention mechanism with an encoder-decoder model for completion of knowledge bases using distant supervised data.
Encoder-decoder models have been used for many NLP applications such as neural machine translation BIBREF29, BIBREF10, BIBREF30, sentence generation from structured data BIBREF31, BIBREF32, and open information extraction BIBREF33, BIBREF34. Pointer networks BIBREF35 have been used to extract a text span from text for tasks such as question answering BIBREF36, BIBREF37. For the first time, we use pointer networks with an encoder-decoder model to extract relation tuples from sentences.
Conclusion
Extracting relation tuples from sentences is a challenging task due to the varying lengths of entities, the presence of multiple tuples, and the overlap of entities among tuples. In this paper, we propose two novel approaches using encoder-decoder architecture to address this task. Experiments on the New York Times (NYT) corpus show that our proposed models achieve significantly improved new state-of-the-art F1 scores. As future work, we would like to explore our proposed models for a document-level tuple extraction task.
Acknowledgments
We would like to thank the anonymous reviewers for their valuable and constructive comments on this paper. | WordDecoding (WDec) model |
56123dd42cf5c77fc9a88fc311ed2e1eb672126e | 56123dd42cf5c77fc9a88fc311ed2e1eb672126e_0 | Q: What previous work do the authors refer to?
Text: Introduction
Distantly-supervised information extraction systems extract relation tuples with a set of pre-defined relations from text. Traditionally, researchers BIBREF0, BIBREF1, BIBREF2 use pipeline approaches where a named entity recognition (NER) system is used to identify the entities in a sentence and then a classifier is used to find the relation (or no relation) between them. However, due to the complete separation of entity detection and relation classification, these models miss the interaction between multiple relation tuples present in a sentence.
Recently, several neural network-based models BIBREF3, BIBREF4 were proposed to jointly extract entities and relations from a sentence. These models used a parameter-sharing mechanism to extract the entities and relations in the same network. But they still find the relations after identifying all the entities and do not fully capture the interaction among multiple tuples. BIBREF5 (BIBREF5) proposed a joint extraction model based on neural sequence tagging scheme. But their model could not extract tuples with overlapping entities in a sentence as it could not assign more than one tag to a word. BIBREF6 (BIBREF6) proposed a neural encoder-decoder model for extracting relation tuples with overlapping entities. However, they used a copy mechanism to copy only the last token of the entities, thus this model could not extract the full entity names. Also, their best performing model used a separate decoder to extract each tuple which limited the power of their model. This model was trained with a fixed number of decoders and could not extract tuples beyond that number during inference. Encoder-decoder models are powerful models and they are successful in many NLP tasks such as machine translation, sentence generation from structured data, and open information extraction.
In this paper, we explore how encoder-decoder models can be used effectively for extracting relation tuples from sentences. There are three major challenges in this task: (i) The model should be able to extract entities and relations together. (ii) It should be able to extract multiple tuples with overlapping entities. (iii) It should be able to extract exactly two entities of a tuple with their full names. To address these challenges, we propose two novel approaches using encoder-decoder architecture. We first propose a new representation scheme for relation tuples (Table TABREF1) such that it can represent multiple tuples with overlapping entities and different lengths of entities in a simple way. We employ an encoder-decoder model where the decoder extracts one word at a time like machine translation models. At the end of sequence generation, due to the unique representation of the tuples, we can extract the tuples from the sequence of words. Although this model performs quite well, generating one word at a time is somewhat unnatural for this task. Each tuple has exactly two entities and one relation, and each entity appears as a continuous text span in a sentence. The most effective way to identify them is to find their start and end location in the sentence. Each relation tuple can then be represented using five items: start and end location of the two entities and the relation between them (see Table TABREF1). Keeping this in mind, we propose a pointer network-based decoding framework. This decoder consists of two pointer networks which find the start and end location of the two entities in a sentence, and a classification network which identifies the relation between them. At every time step of the decoding, this decoder extracts an entire relation tuple, not just a word. Experiments on the New York Times (NYT) datasets show that our approaches work effectively for this task and achieve state-of-the-art performance. To summarize, the contributions of this paper are as follows:
(1) We propose a new representation scheme for relation tuples such that an encoder-decoder model, which extracts one word at each time step, can still find multiple tuples with overlapping entities and tuples with multi-token entities from sentences. We also propose a masking-based copy mechanism to extract the entities from the source sentence only.
(2) We propose a modification in the decoding framework with pointer networks to make the encoder-decoder model more suitable for this task. At every time step, this decoder extracts an entire relation tuple, not just a word. This new decoding framework helps in speeding up the training process and uses less resources (GPU memory). This will be an important factor when we move from sentence-level tuple extraction to document-level extraction.
(3) Experiments on the NYT datasets show that our approaches outperform all the previous state-of-the-art models significantly and set a new benchmark on these datasets.
Task Description
A relation tuple consists of two entities and a relation. Such tuples can be found in sentences where an entity is a text span in a sentence and a relation comes from a pre-defined set $R$. These tuples may share one or both entities among them. Based on this, we divide the sentences into three classes: (i) No Entity Overlap (NEO): A sentence in this class has one or more tuples, but they do not share any entities. (ii) Entity Pair Overlap (EPO): A sentence in this class has more than one tuple, and at least two tuples share both the entities in the same or reverse order. (iii) Single Entity Overlap (SEO): A sentence in this class has more than one tuple and at least two tuples share exactly one entity. It should be noted that a sentence can belong to both EPO and SEO classes. Our task is to extract all relation tuples present in a sentence.
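The three overlap classes can be checked mechanically, as in the sketch below (our illustration; entities are compared by surface string here).

```python
def overlap_classes(tuples):
    """tuples: list of (entity1, entity2, relation) found in one sentence.
    Returns the subset of {'NEO', 'EPO', 'SEO'} the sentence belongs to."""
    epo = seo = False
    for i in range(len(tuples)):
        for j in range(i + 1, len(tuples)):
            shared = len({tuples[i][0], tuples[i][1]} & {tuples[j][0], tuples[j][1]})
            if shared == 2:
                epo = True            # same entity pair, same or reverse order
            elif shared == 1:
                seo = True            # exactly one shared entity
    classes = {name for name, flag in (("EPO", epo), ("SEO", seo)) if flag}
    return classes if classes else ({"NEO"} if tuples else set())
```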
Encoder-Decoder Architecture
In this task, input to the system is a sequence of words, and output is a set of relation tuples. In our first approach, we represent each tuple as entity1 ; entity2 ; relation. We use `;' as a separator token to separate the tuple components. Multiple tuples are separated using the `$\vert $' token. We have included one example of such representation in Table TABREF1. Multiple relation tuples with overlapping entities and different lengths of entities can be represented in a simple way using these special tokens (; and $\vert $). During inference, after the end of sequence generation, relation tuples can be extracted easily using these special tokens. Due to this uniform representation scheme, where entity tokens, relation tokens, and special tokens are treated similarly, we use a shared vocabulary between the encoder and decoder which includes all of these tokens. The input sentence contains clue words for every relation which can help generate the relation tokens. We use two special tokens so that the model can distinguish between the beginning of a relation tuple and the beginning of a tuple component. To extract the relation tuples from a sentence using the encoder-decoder model, the model has to generate the entity tokens, find relation clue words and map them to the relation tokens, and generate the special tokens at appropriate time. Our experiments show that the encoder-decoder models can achieve this quite effectively.
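The representation scheme and the inference-time extraction can be mirrored by a pair of small helper functions (illustrative only; the example entities and relation names are made up).

```python
def linearize(tuples):
    """Render tuples in the target format: entity1 ; entity2 ; relation, tuples joined by '|'."""
    return " | ".join(f"{e1} ; {e2} ; {rel}" for e1, e2, rel in tuples)

def parse(sequence, relation_set):
    """Recover tuples from a decoded token sequence, dropping malformed or duplicate ones."""
    tuples = set()
    for chunk in sequence.split("|"):
        parts = [p.strip() for p in chunk.split(";")]
        if len(parts) == 3 and parts[2] in relation_set and parts[0] != parts[1]:
            tuples.add(tuple(parts))
    return tuples

target = linearize([("Barack Obama", "United States", "president_of"),
                    ("Barack Obama", "Hawaii", "born_in")])
print(parse(target, {"president_of", "born_in"}))
```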
Encoder-Decoder Architecture ::: Embedding Layer & Encoder
We create a single vocabulary $V$ consisting of the source sentence tokens, relation names from relation set $R$, special separator tokens (`;', `$\vert $'), start-of-target-sequence token (SOS), end-of-target-sequence token (EOS), and unknown word token (UNK). Word-level embeddings are formed by two components: (1) pre-trained word vectors (2) character embedding-based feature vectors. We use a word embedding layer $\mathbf {E}_w \in \mathbb {R}^{\vert V \vert \times d_w}$ and a character embedding layer $\mathbf {E}_c \in \mathbb {R}^{\vert A \vert \times d_c}$, where $d_w$ is the dimension of word vectors, $A$ is the character alphabet of input sentence tokens, and $d_c$ is the dimension of character embedding vectors. Following BIBREF7 (BIBREF7), we use a convolutional neural network with max-pooling to extract a feature vector of size $d_f$ for every word. Word embeddings and character embedding-based feature vectors are concatenated ($\Vert $) to obtain the representation of the input tokens.
A source sentence $\mathbf {S}$ is represented by vectors of its tokens $\mathbf {x}_1, \mathbf {x}_2,....,\mathbf {x}_n$, where $\mathbf {x}_i \in \mathbb {R}^{(d_w+d_f)}$ is the vector representation of the $i$th word and $n$ is the length of $\mathbf {S}$. These vectors $\mathbf {x}_i$ are passed to a bi-directional LSTM BIBREF8 (Bi-LSTM) to obtain the hidden representation $\mathbf {h}_i^E$. We set the hidden dimension of the forward and backward LSTM of the Bi-LSTM to be $d_h/2$ to obtain $\mathbf {h}_i^E \in \mathbb {R}^{d_h}$, where $d_h$ is the hidden dimension of the sequence generator LSTM of the decoder described below.
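A condensed sketch of the embedding layer and encoder is given below (PyTorch, illustrative; padding and masking of variable-length inputs are omitted).

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, char_size, d_w=300, d_c=50, d_f=50, d_h=300):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_w)
        self.char_emb = nn.Embedding(char_size, d_c)
        self.char_cnn = nn.Conv1d(d_c, d_f, kernel_size=3, padding=1)    # filter width 3
        self.bilstm = nn.LSTM(d_w + d_f, d_h // 2, bidirectional=True, batch_first=True)

    def forward(self, word_ids, char_ids):
        """word_ids: [batch, n]; char_ids: [batch, n, max_word_len]."""
        b, n, L = char_ids.shape
        chars = self.char_emb(char_ids).view(b * n, L, -1).transpose(1, 2)   # [b*n, d_c, L]
        char_feat = self.char_cnn(chars).max(dim=-1).values.view(b, n, -1)   # max pooling over characters
        x = torch.cat([self.word_emb(word_ids), char_feat], dim=-1)          # word || char feature
        h_enc, _ = self.bilstm(x)                                            # [batch, n, d_h]
        return h_enc
```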
Encoder-Decoder Architecture ::: Word-level Decoder & Copy Mechanism
A target sequence $\mathbf {T}$ is represented by only word embedding vectors of its tokens $\mathbf {y}_0, \mathbf {y}_1,....,\mathbf {y}_m$ where $\mathbf {y}_i \in \mathbb {R}^{d_w}$ is the embedding vector of the $i$th token and $m$ is the length of the target sequence. $\mathbf {y}_0$ and $\mathbf {y}_m$ represent the embedding vector of the SOS and EOS token respectively. The decoder generates one token at a time and stops when EOS is generated. We use an LSTM as the decoder and at time step $t$, the decoder takes the source sentence encoding ($\mathbf {e}_t \in \mathbb {R}^{d_h}$) and the previous target word embedding ($\mathbf {y}_{t-1}$) as the input and generates the hidden representation of the current token ($\mathbf {h}_t^D \in \mathbb {R}^{d_h}$). The sentence encoding vector $\mathbf {e}_t$ can be obtained using attention mechanism. $\mathbf {h}_t^D$ is projected to the vocabulary $V$ using a linear layer with weight matrix $\mathbf {W}_v \in \mathbb {R}^{\vert V \vert \times d_h}$ and bias vector $\mathbf {b}_v \in \mathbb {R}^{\vert V \vert }$ (projection layer).
$\mathbf {o}_t$ represents the normalized scores of all the words in the embedding vocabulary at time step $t$. $\mathbf {h}_{t-1}^D$ is the previous hidden state of the LSTM.
The projection layer of the decoder maps the decoder output to the entire vocabulary. During training, we use the gold label target tokens directly. However, during inference, the decoder may predict a token from the vocabulary which is not present in the current sentence or the set of relations or the special tokens. To prevent this, we use a masking technique while applying the softmax operation at the projection layer. We mask (exclude) all words of the vocabulary except the current source sentence tokens, relation tokens, separator tokens (`;', `$\vert $'), UNK, and EOS tokens in the softmax operation. To mask (exclude) some word from softmax, we set the corresponding value in $\hat{\mathbf {o}}_t$ at $-\infty $ and the corresponding softmax score will be zero. This ensures the copying of entities from the source sentence only. We include the UNK token in the softmax operation to make sure that the model generates new entities during inference. If the decoder predicts an UNK token, we replace it with the corresponding source word which has the highest attention score. During inference, after decoding is finished, we extract all tuples based on the special tokens, remove duplicate tuples and tuples in which both entities are the same or tuples where the relation token is not from the relation set. This model is referred to as WordDecoding (WDec) henceforth.
Encoder-Decoder Architecture ::: Pointer Network-Based Decoder
In the second approach, we identify the entities in the sentence using their start and end locations. We remove the special tokens and relation names from the word vocabulary and word embeddings are used only at the encoder side along with character embeddings. We use an additional relation embedding matrix $\mathbf {E}_r \in \mathbb {R}^{\vert R \vert \times d_r}$ at the decoder side of our model, where $R$ is the set of relations and $d_r$ is the dimension of relation vectors. The relation set $R$ includes a special relation token EOS which indicates the end of the sequence. Relation tuples are represented as a sequence $T=y_0, y_1,....,y_m$, where $y_t$ is a tuple consisting of four indexes in the source sentence indicating the start and end location of the two entities and a relation between them (see Table TABREF1). $y_0$ is a dummy tuple that represents the start tuple of the sequence and $y_m$ functions as the end tuple of the sequence which has EOS as the relation (entities are ignored for this tuple). The decoder consists of an LSTM with hidden dimension $d_h$ to generate the sequence of tuples, two pointer networks to find the two entities, and a classification network to find the relation of a tuple. At time step $t$, the decoder takes the source sentence encoding ($\mathbf {e}_t \in \mathbb {R}^{d_h}$) and the representation of all previously generated tuples ($\mathbf {y}_{prev}=\sum _{j=0}^{t-1}\mathbf {y}_{j}$) as the input and generates the hidden representation of the current tuple, $\mathbf {h}_t^D \in \mathbb {R}^{d_h}$. The sentence encoding vector $\mathbf {e}_t$ is obtained using an attention mechanism as explained later. Relation tuples are a set and to prevent the decoder from generating the same tuple again, we pass the information about all previously generated tuples at each time step of decoding. $\mathbf {y}_j$ is the vector representation of the tuple predicted at time step $j < t$ and we use the zero vector ($\mathbf {y}_0=\overrightarrow{0}$) to represent the dummy tuple $y_0$. $\mathbf {h}_{t-1}^D$ is the hidden state of the LSTM at time step $t-1$.
Encoder-Decoder Architecture ::: Relation Tuple Extraction
After obtaining the hidden representation of the current tuple $\mathbf {h}_t^D$, we first find the start and end pointers of the two entities in the source sentence. We concatenate the vector $\mathbf {h}_t^D$ with the hidden vectors $\mathbf {h}_i^E$ of the encoder and pass them to a Bi-LSTM layer with hidden dimension $d_p$ for forward and backward LSTM. The hidden vectors of this Bi-LSTM layer $\mathbf {h}_i^k \in \mathbb {R}^{2d_p}$ are passed to two feed-forward networks (FFN) with softmax to convert each hidden vector into two scalar values between 0 and 1. Softmax operation is applied across all the words in the input sentence. These two scalar values represent the probability of the corresponding source sentence token to be the start and end location of the first entity. This Bi-LSTM layer with the two feed-forward layers is the first pointer network which identifies the first entity of the current relation tuple.
where $\mathbf {W}_s^1 \in \mathbb {R}^{1 \times 2d_p}$, $\mathbf {W}_e^1 \in \mathbb {R}^{1 \times 2d_p}$, ${b}_s^1$, and ${b}_e^1$ are the weights and bias parameters of the feed-forward layers. ${s}_i^1$, ${e}_i^1$ represent the normalized probabilities of the $i$th source word being the start and end token of the first entity of the predicted tuple. We use another pointer network to extract the second entity of the tuple. We concatenate the hidden vectors $\mathbf {h}_i^k$ with $\mathbf {h}_t^D$ and $\mathbf {h}_i^E$ and pass them to the second pointer network to obtain ${s}_i^2$ and ${e}_i^2$, which represent the normalized probabilities of the $i$th source word being the start and end of the second entity. These normalized probabilities are used to find the vector representation of the two entities, $\mathbf {a}_t^1$ and $\mathbf {a}_t^2$.
We concatenate the entity vector representations $\mathbf {a}_t^1$ and $\mathbf {a}_t^2$ with $\mathbf {h}_t^D$ and pass it to a feed-forward network (FFN) with softmax to find the relation. This feed-forward layer has a weight matrix $\mathbf {W}_r \in \mathbb {R}^{\vert R \vert \times (8d_p + d_h)}$ and a bias vector $\mathbf {b}_r \in \mathbb {R}^{\vert R \vert }$.
$\mathbf {r}_t$ represents the normalized probabilities of the relation at time step $t$. The relation embedding vector $\mathbf {z}_t$ is obtained using $\mathrm {argmax}$ of $\mathbf {r}_t$ and $\mathbf {E}_r$. $\mathbf {y}_t \in \mathbb {R}^{(8d_p + d_r)}$ is the vector representation of the tuple predicted at time step $t$. During training, we pass the embedding vector of the gold label relation in place of the predicted relation. So the $\mathrm {argmax}$ function does not affect the back-propagation during training. The decoder stops the sequence generation process when the predicted relation is EOS. This is the classification network of the decoder.
During inference, we select the start and end location of the two entities such that the product of the four pointer probabilities is maximized keeping the constraints that the two entities do not overlap with each other and $1 \le b \le e \le n$ where $b$ and $e$ are the start and end location of the corresponding entities. We first choose the start and end location of entity 1 based on the maximum product of the corresponding start and end pointer probabilities. Then we find entity 2 in a similar way excluding the span of entity 1 to avoid overlap. The same procedure is repeated but this time we first find entity 2 followed by entity 1. We choose that pair of entities which gives the higher product of four pointer probabilities between these two choices. This model is referred to as PtrNetDecoding (PNDec) henceforth.
Encoder-Decoder Architecture ::: Attention Modeling
We experimented with three different attention mechanisms for our word-level decoding model to obtain the source context vector $\mathbf {e}_t$:
(1) Avg.: The context vector is obtained by averaging the hidden vectors of the encoder: $\mathbf {e}_t=\frac{1}{n}\sum _{i=1}^n \mathbf {h}_i^E$
(2) N-gram: The context vector is obtained by the N-gram attention mechanism of BIBREF9 (BIBREF9) with N=3.
$\textnormal {a}_i^g=(\mathbf {h}_n^{E})^T \mathbf {V}^g \mathbf {w}_i^g$, $\alpha ^g = \mathrm {softmax}(\mathbf {a}^g)$
$\mathbf {e}_t=[\mathbf {h}_n^E \Vert \sum _{g=1}^N \mathbf {W}^g (\sum _{i=1}^{\vert G^g \vert } \alpha _i^g \mathbf {w}_i^g)]$
Here, $\mathbf {h}_n^E$ is the last hidden state of the encoder, $g \in \lbrace 1, 2, 3\rbrace $ refers to the word gram combination, $G^g$ is the sequence of g-gram word representations for the input sentence, $\mathbf {w}_i^g$ is the $i$th g-gram vector (2-gram and 3-gram representations are obtained by average pooling), $\alpha _i^g$ is the normalized attention score for the $i$th g-gram vector, $\mathbf {W} \in \mathbb {R}^{d_h \times d_h}$ and $\mathbf {V} \in \mathbb {R}^{d_h \times d_h}$ are trainable parameters.
(3) Single: The context vector is obtained by the attention mechanism proposed by BIBREF10 (BIBREF10). This attention mechanism gives the best performance with the word-level decoding model.
$\mathbf {u}_t^i = \mathbf {W}_{u} \mathbf {h}_i^E, \quad \mathbf {q}_t^i = \mathbf {W}_{q} \mathbf {h}_{t-1}^D + \mathbf {b}_{q}$,
$\textnormal {a}_t^i = \mathbf {v}_a \tanh (\mathbf {q}_t^i + \mathbf {u}_t^i), \quad \alpha _t = \mathrm {softmax}(\mathbf {a}_t)$,
$\mathbf {e}_t = \sum _{i=1}^n \alpha _t^i \mathbf {h}_i^E$
where $\mathbf {W}_u \in \mathbb {R}^{d_h \times d_h}$, $\mathbf {W}_q \in \mathbb {R}^{d_h \times d_h}$, and $\mathbf {v}_a \in \mathbb {R}^{d_h}$ are all trainable attention parameters and $\mathbf {b}_q \in \mathbb {R}^{d_h}$ is a bias vector. $\alpha _t^i$ is the normalized attention score of the $i$th source word at the decoding time step $t$.
For our pointer network-based decoding model, we use three variants of the single attention model. First, we use $\mathbf {h}_{t-1}^D$ to calculate $\mathbf {q}_t^i$ in the attention mechanism. Next, we use $\mathbf {y}_{prev}$ to calculate $\mathbf {q}_t^i$, where $\mathbf {W}_q \in \mathbb {R}^{(8d_p + d_r) \times d_h}$. In the final variant, we obtain the attentive context vector by concatenating the two attentive vectors obtained using $\mathbf {h}_{t-1}^D$ and $\mathbf {y}_{prev}$. This gives the best performance with the pointer network-based decoding model. These variants are referred to as $\mathrm {dec_{hid}}$, $\mathrm {tup_{prev}}$, and $\mathrm {combo}$ in Table TABREF17.
Encoder-Decoder Architecture ::: Loss Function
We minimize the negative log-likelihood loss of the generated words for word-level decoding ($\mathcal {L}_{word}$) and minimize the sum of negative log-likelihood loss of relation classification and the four pointer locations for pointer network-based decoding ($\mathcal {L}_{ptr}$).
$v_t^b$ is the softmax score of the target word at time step $t$ for the word-level decoding model. $r$, $s$, and $e$ are the softmax score of the corresponding true relation label, true start and end pointer location of an entity. $b$, $t$, and $c$ refer to the $b$th training instance, $t$th time step of decoding, and the two entities of a tuple respectively. $B$ and $T$ are the batch size and maximum time step of the decoder respectively.
Experiments ::: Datasets
We focus on the task of extracting multiple tuples with overlapping entities from sentences. We choose the New York Times (NYT) corpus for our experiments. This corpus has multiple versions, and we choose the following two versions as their test datasets have a significantly larger number of instances of multiple relation tuples with overlapping entities. (i) The first version is used by BIBREF6 (BIBREF6) (mentioned as NYT in their paper) and has 24 relations. We name this version as NYT24. (ii) The second version is used by BIBREF11 (BIBREF11) (mentioned as NYT10 in their paper) and has 29 relations. We name this version as NYT29. We select 10% of the original training data and use it as the validation dataset. The remaining 90% is used for training. We include statistics of the training and test datasets in Table TABREF11.
Experiments ::: Parameter Settings
We run the Word2Vec BIBREF12 tool on the NYT corpus to initialize the word embeddings. The character embeddings and relation embeddings are initialized randomly. All embeddings are updated during training. We set the word embedding dimension $d_w=300$, relation embedding dimension $d_r=300$, character embedding dimension $d_c=50$, and character-based word feature dimension $d_f=50$. To extract the character-based word feature vector, we set the CNN filter width at 3 and the maximum length of a word at 10. The hidden dimension $d_h$ of the decoder LSTM cell is set at 300 and the hidden dimension of the forward and the backward LSTM of the encoder is set at 150. The hidden dimension of the forward and backward LSTM of the pointer networks is set at $d_p=300$. The model is trained with mini-batch size of 32 and the network parameters are optimized using Adam BIBREF13. Dropout layers with a dropout rate fixed at $0.3$ are used in our network to avoid overfitting.
Experiments ::: Baselines and Evaluation Metrics
We compare our model with the following state-of-the-art joint entity and relation extraction models:
(1) SPTree BIBREF4: This is an end-to-end neural entity and relation extraction model using sequence LSTM and Tree LSTM. Sequence LSTM is used to identify all the entities first and then Tree LSTM is used to find the relation between all pairs of entities.
(2) Tagging BIBREF5: This is a neural sequence tagging model which jointly extracts the entities and relations using an LSTM encoder and an LSTM decoder. They used a Cartesian product of entity tags and relation tags to encode the entity and relation information together. This model does not work when tuples have overlapping entities.
(3) CopyR BIBREF6: This model uses an encoder-decoder approach for joint extraction of entities and relations. It copies only the last token of an entity from the source sentence. Their best performing multi-decoder model is trained with a fixed number of decoders where each decoder extracts one tuple.
(4) HRL BIBREF11: This model uses a reinforcement learning (RL) algorithm with two levels of hierarchy for tuple extraction. A high-level RL finds the relation and a low-level RL identifies the two entities using a sequence tagging approach. This sequence tagging approach cannot always ensure extraction of exactly two entities.
(5) GraphR BIBREF14: This model considers each token in a sentence as a node in a graph, and edges connecting the nodes as relations between them. They use graph convolution network (GCN) to predict the relations of every edge and then filter out some of the relations.
(6) N-gram Attention BIBREF9: This model uses an encoder-decoder approach with N-gram attention mechanism for knowledge-base completion using distantly supervised data. The encoder uses the source tokens as its vocabulary and the decoder uses the entire Wikidata BIBREF15 entity IDs and relation IDs as its vocabulary. The encoder takes the source sentence as input and the decoder outputs the two entity IDs and relation ID for every tuple. During training, it uses the mapping of entity names and their Wikidata IDs of the entire Wikidata for proper alignment. Our task of extracting relation tuples with the raw entity names from a sentence is more challenging since entity names are not of fixed length. Our more generic approach is also helpful for extracting new entities which are not present in the existing knowledge bases such as Wikidata. We use their N-gram attention mechanism in our model to compare its performance with other attention models (Table TABREF17).
We use the same evaluation method used by BIBREF11 (BIBREF11) in their experiments. We consider the extracted tuples as a set and remove the duplicate tuples. An extracted tuple is considered as correct if the corresponding full entity names are correct and the relation is also correct. We report precision, recall, and F1 score for comparison.
Experiments ::: Experimental Results
Among the baselines, HRL achieves significantly higher F1 scores on the two datasets. We run their model and our models five times and report the median results in Table TABREF15. Scores of other baselines in Table TABREF15 are taken from previous published papers BIBREF6, BIBREF11, BIBREF14. Our WordDecoding (WDec) model achieves F1 scores that are $3.9\%$ and $4.1\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. Similarly, our PtrNetDecoding (PNDec) model achieves F1 scores that are $3.0\%$ and $1.3\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. We perform a statistical significance test (t-test) under a bootstrap pairing between HRL and our models and see that the higher F1 scores achieved by our models are statistically significant ($p < 0.001$). Next, we combine the outputs of five runs of our models and five runs of HRL to build ensemble models. For a test instance, we include those tuples which are extracted in the majority ($\ge 3$) of the five runs. This ensemble mechanism increases the precision significantly on both datasets with a small improvement in recall as well. In the ensemble scenario, compared to HRL, WDec achieves $4.2\%$ and $3.5\%$ higher F1 scores and PNDec achieves $4.2\%$ and $2.9\%$ higher F1 scores on the NYT29 and NYT24 datasets respectively.
Analysis and Discussion ::: Ablation Studies
We include the performance of different attention mechanisms with our WordDecoding model, effects of our masking-based copy mechanism, and ablation results of three variants of the single attention mechanism with our PtrNetDecoding model in Table TABREF17. WordDecoding with single attention achieves the highest F1 score on both datasets. We also see that our copy mechanism improves F1 scores by around 4–7% in each attention mechanism with both datasets. PtrNetDecoding achieves the highest F1 scores when we combine the two attention mechanisms with respect to the previous hidden vector of the decoder LSTM ($\mathbf {h}_{t-1}^D$) and representation of all previously extracted tuples ($\mathbf {y}_{prev}$).
Analysis and Discussion ::: Performance Analysis
From Table TABREF15, we see that CopyR, HRL, and our models achieve significantly higher F1 scores on the NYT24 dataset than on the NYT29 dataset. Both datasets have a similar set of relations and similar texts (NYT). So task-wise both datasets should pose a similar challenge. However, the F1 scores suggest that the NYT24 dataset is easier than NYT29. The reason is that NYT24 has around 72.0% of overlapping tuples between the training and test data (% of test tuples that appear in the training data with different source sentences). In contrast, NYT29 has only 41.7% of overlapping tuples. Due to the memorization power of deep neural networks, models can achieve a much higher F1 score on NYT24. The difference between the F1 scores of WordDecoding and PtrNetDecoding on NYT24 is marginally higher than on NYT29, since WordDecoding has more trainable parameters (about 27 million) than PtrNetDecoding (about 24.5 million) and NYT24 has very high tuple overlap. However, their ensemble versions achieve closer F1 scores on both datasets.
Despite achieving marginally lower F1 scores, the pointer network-based model can be considered more intuitive and suitable for this task. WordDecoding may not extract the special tokens and relation tokens at the right time steps, which is critical for finding the tuples from the generated sequence of words. PtrNetDecoding always extracts two entities of varying length and a relation for every tuple. We also observe that PtrNetDecoding is more than two times faster and takes one-third of the GPU memory of WordDecoding during training and inference. This speedup and smaller memory consumption are achieved due to the fewer number of decoding steps of PtrNetDecoding compared to WordDecoding. PtrNetDecoding extracts an entire tuple at each time step, whereas WordDecoding extracts just one word at each time step and so requires eight time steps on average to extract a tuple (assuming that the average length of an entity is two). The softmax operation at the projection layer of WordDecoding is applied across the entire vocabulary and the vocabulary size can be large (more than 40,000 for our datasets). In case of PtrNetDecoding, the softmax operation is applied across the sentence length (maximum of 100 in our experiments) and across the relation set (24 and 29 for our datasets). The costly softmax operation and the higher number of decoding time steps significantly increase the training and inference time for WordDecoding. The encoder-decoder model proposed by BIBREF9 (BIBREF9) faces a similar softmax-related problem as their target vocabulary contains the entire Wikidata entity IDs and relation IDs which is in the millions. HRL, which uses a deep reinforcement learning algorithm, takes around 8x more time to train than PtrNetDecoding with a similar GPU configuration. The speedup and smaller memory consumption will be useful when we move from sentence-level extraction to document-level extraction, since document length is much higher than sentence length and a document contains a higher number of tuples.
Analysis and Discussion ::: Error Analysis
The relation tuples extracted by a joint model can be erroneous for multiple reasons such as: (i) extracted entities are wrong; (ii) extracted relations are wrong; (iii) pairings of entities with relations are wrong. To see the effects of the first two reasons, we analyze the performance of HRL and our models on entity generation and relation generation separately. For entity generation, we only consider those entities which are part of some tuple. For relation generation, we only consider the relations of the tuples. We include the performance of our two models and HRL on entity generation and relation generation in Table TABREF20. Our proposed models perform better than HRL on both tasks. Comparing our two models, PtrNetDecoding performs better than WordDecoding on both tasks, although WordDecoding achieves higher F1 scores in tuple extraction. This suggests that PtrNetDecoding makes more errors while pairing the entities with relations. We further analyze the outputs of our models and HRL to determine the errors due to ordering of entities (Order), mismatch of the first entity (Ent1), and mismatch of the second entity (Ent2) in Table TABREF21. WordDecoding generates fewer errors than the other two models in all the categories and thus achieves the highest F1 scores on both datasets.
Related Work
Traditionally, researchers BIBREF0, BIBREF1, BIBREF2, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25 used a pipeline approach for relation tuple extraction where relations were identified using a classification network after all entities were detected. BIBREF26 (BIBREF26) used an encoder-decoder model to extract multiple relations present between two given entities.
Recently, some researchers BIBREF3, BIBREF4, BIBREF27, BIBREF28 tried to bring these two tasks closer together by sharing their parameters and optimizing them together. BIBREF5 (BIBREF5) used a sequence tagging scheme to jointly extract the entities and relations. BIBREF6 (BIBREF6) proposed an encoder-decoder model with copy mechanism to extract relation tuples with overlapping entities. BIBREF11 (BIBREF11) proposed a joint extraction model based on reinforcement learning (RL). BIBREF14 (BIBREF14) used a graph convolution network (GCN) where they treated each token in a sentence as a node in a graph and edges were considered as relations. BIBREF9 (BIBREF9) used an N-gram attention mechanism with an encoder-decoder model for completion of knowledge bases using distant supervised data.
Encoder-decoder models have been used for many NLP applications such as neural machine translation BIBREF29, BIBREF10, BIBREF30, sentence generation from structured data BIBREF31, BIBREF32, and open information extraction BIBREF33, BIBREF34. Pointer networks BIBREF35 have been used to extract a text span from text for tasks such as question answering BIBREF36, BIBREF37. For the first time, we use pointer networks with an encoder-decoder model to extract relation tuples from sentences.
Conclusion
Extracting relation tuples from sentences is a challenging task due to the varying lengths of entities, the presence of multiple tuples, and the overlap of entities among tuples. In this paper, we propose two novel approaches using encoder-decoder architecture to address this task. Experiments on the New York Times (NYT) corpus show that our proposed models achieve significantly improved new state-of-the-art F1 scores. As future work, we would like to explore our proposed models for a document-level tuple extraction task.
Acknowledgments
We would like to thank the anonymous reviewers for their valuable and constructive comments on this paper. | SPTree, Tagging, CopyR, HRL, GraphR, N-gram Attention |
1898f999626f9a6da637bd8b4857e5eddf2fc729 | 1898f999626f9a6da637bd8b4857e5eddf2fc729_0 | Q: How much higher are the F1 scores compared to previous work?
Text: Introduction
Distantly-supervised information extraction systems extract relation tuples with a set of pre-defined relations from text. Traditionally, researchers BIBREF0, BIBREF1, BIBREF2 use pipeline approaches where a named entity recognition (NER) system is used to identify the entities in a sentence and then a classifier is used to find the relation (or no relation) between them. However, due to the complete separation of entity detection and relation classification, these models miss the interaction between multiple relation tuples present in a sentence.
Recently, several neural network-based models BIBREF3, BIBREF4 were proposed to jointly extract entities and relations from a sentence. These models used a parameter-sharing mechanism to extract the entities and relations in the same network. But they still find the relations after identifying all the entities and do not fully capture the interaction among multiple tuples. BIBREF5 (BIBREF5) proposed a joint extraction model based on neural sequence tagging scheme. But their model could not extract tuples with overlapping entities in a sentence as it could not assign more than one tag to a word. BIBREF6 (BIBREF6) proposed a neural encoder-decoder model for extracting relation tuples with overlapping entities. However, they used a copy mechanism to copy only the last token of the entities, thus this model could not extract the full entity names. Also, their best performing model used a separate decoder to extract each tuple which limited the power of their model. This model was trained with a fixed number of decoders and could not extract tuples beyond that number during inference. Encoder-decoder models are powerful models and they are successful in many NLP tasks such as machine translation, sentence generation from structured data, and open information extraction.
In this paper, we explore how encoder-decoder models can be used effectively for extracting relation tuples from sentences. There are three major challenges in this task: (i) The model should be able to extract entities and relations together. (ii) It should be able to extract multiple tuples with overlapping entities. (iii) It should be able to extract exactly two entities of a tuple with their full names. To address these challenges, we propose two novel approaches using encoder-decoder architecture. We first propose a new representation scheme for relation tuples (Table TABREF1) such that it can represent multiple tuples with overlapping entities and different lengths of entities in a simple way. We employ an encoder-decoder model where the decoder extracts one word at a time like machine translation models. At the end of sequence generation, due to the unique representation of the tuples, we can extract the tuples from the sequence of words. Although this model performs quite well, generating one word at a time is somewhat unnatural for this task. Each tuple has exactly two entities and one relation, and each entity appears as a continuous text span in a sentence. The most effective way to identify them is to find their start and end location in the sentence. Each relation tuple can then be represented using five items: start and end location of the two entities and the relation between them (see Table TABREF1). Keeping this in mind, we propose a pointer network-based decoding framework. This decoder consists of two pointer networks which find the start and end location of the two entities in a sentence, and a classification network which identifies the relation between them. At every time step of the decoding, this decoder extracts an entire relation tuple, not just a word. Experiments on the New York Times (NYT) datasets show that our approaches work effectively for this task and achieve state-of-the-art performance. To summarize, the contributions of this paper are as follows:
(1) We propose a new representation scheme for relation tuples such that an encoder-decoder model, which extracts one word at each time step, can still find multiple tuples with overlapping entities and tuples with multi-token entities from sentences. We also propose a masking-based copy mechanism to extract the entities from the source sentence only.
(2) We propose a modification in the decoding framework with pointer networks to make the encoder-decoder model more suitable for this task. At every time step, this decoder extracts an entire relation tuple, not just a word. This new decoding framework helps in speeding up the training process and uses less resources (GPU memory). This will be an important factor when we move from sentence-level tuple extraction to document-level extraction.
(3) Experiments on the NYT datasets show that our approaches outperform all the previous state-of-the-art models significantly and set a new benchmark on these datasets.
Task Description
A relation tuple consists of two entities and a relation. Such tuples can be found in sentences where an entity is a text span in a sentence and a relation comes from a pre-defined set $R$. These tuples may share one or both entities among them. Based on this, we divide the sentences into three classes: (i) No Entity Overlap (NEO): A sentence in this class has one or more tuples, but they do not share any entities. (ii) Entity Pair Overlap (EPO): A sentence in this class has more than one tuple, and at least two tuples share both the entities in the same or reverse order. (iii) Single Entity Overlap (SEO): A sentence in this class has more than one tuple and at least two tuples share exactly one entity. It should be noted that a sentence can belong to both EPO and SEO classes. Our task is to extract all relation tuples present in a sentence.
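To make the three overlap classes concrete, the short sketch below labels the gold tuples of one sentence as NEO, EPO, and/or SEO. The tuple format (entity1, entity2, relation) and the function name are illustrative assumptions, not code from the original dataset.

```python
# Minimal sketch: assign overlap classes (NEO / EPO / SEO) to the tuples of one sentence.
# A tuple is represented as (entity1, entity2, relation); a sentence may be both EPO and SEO.
def overlap_classes(tuples):
    classes = set()
    pairs = [frozenset((e1, e2)) for e1, e2, _ in tuples]   # unordered entity pairs
    ents = [{e1, e2} for e1, e2, _ in tuples]
    for i in range(len(tuples)):
        for j in range(i + 1, len(tuples)):
            if pairs[i] == pairs[j]:
                classes.add("EPO")                           # share both entities (any order)
            if len(ents[i] & ents[j]) == 1:
                classes.add("SEO")                           # share exactly one entity
    return classes or {"NEO"}

print(overlap_classes([("A", "B", "r1"), ("B", "A", "r2"), ("B", "C", "r3")]))  # {'EPO', 'SEO'}
```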
Encoder-Decoder Architecture
In this task, input to the system is a sequence of words, and output is a set of relation tuples. In our first approach, we represent each tuple as entity1 ; entity2 ; relation. We use `;' as a separator token to separate the tuple components. Multiple tuples are separated using the `$\vert $' token. We have included one example of such representation in Table TABREF1. Multiple relation tuples with overlapping entities and different lengths of entities can be represented in a simple way using these special tokens (; and $\vert $). During inference, after the end of sequence generation, relation tuples can be extracted easily using these special tokens. Due to this uniform representation scheme, where entity tokens, relation tokens, and special tokens are treated similarly, we use a shared vocabulary between the encoder and decoder which includes all of these tokens. The input sentence contains clue words for every relation which can help generate the relation tokens. We use two special tokens so that the model can distinguish between the beginning of a relation tuple and the beginning of a tuple component. To extract the relation tuples from a sentence using the encoder-decoder model, the model has to generate the entity tokens, find relation clue words and map them to the relation tokens, and generate the special tokens at appropriate time. Our experiments show that the encoder-decoder models can achieve this quite effectively.
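As an illustration of how tuples can be recovered from such a decoded sequence, the sketch below splits on the two separator tokens and keeps only well-formed tuples. The example relation set and the exact post-processing rules are assumptions that mirror the description in the text rather than the authors' code.

```python
# Minimal sketch: recover relation tuples from a decoded token sequence in the
# "entity1 ; entity2 ; relation | entity1 ; entity2 ; relation" format.
RELATIONS = {"/location/country/capital", "/people/person/place_lived"}  # assumed example relation set

def extract_tuples(decoded_tokens, relations=RELATIONS):
    tuples = set()                                   # a set removes duplicate tuples
    for chunk in " ".join(decoded_tokens).split("|"):
        parts = [p.strip() for p in chunk.split(";")]
        if len(parts) == 3 and parts[0] and parts[1] and parts[0] != parts[1] and parts[2] in relations:
            tuples.add(tuple(parts))
    return tuples

decoded = "New Delhi ; India ; /location/country/capital".split()
print(extract_tuples(decoded))   # {('New Delhi', 'India', '/location/country/capital')}
```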
Encoder-Decoder Architecture ::: Embedding Layer & Encoder
We create a single vocabulary $V$ consisting of the source sentence tokens, relation names from relation set $R$, special separator tokens (`;', `$\vert $'), start-of-target-sequence token (SOS), end-of-target-sequence token (EOS), and unknown word token (UNK). Word-level embeddings are formed by two components: (1) pre-trained word vectors (2) character embedding-based feature vectors. We use a word embedding layer $\mathbf {E}_w \in \mathbb {R}^{\vert V \vert \times d_w}$ and a character embedding layer $\mathbf {E}_c \in \mathbb {R}^{\vert A \vert \times d_c}$, where $d_w$ is the dimension of word vectors, $A$ is the character alphabet of input sentence tokens, and $d_c$ is the dimension of character embedding vectors. Following BIBREF7 (BIBREF7), we use a convolutional neural network with max-pooling to extract a feature vector of size $d_f$ for every word. Word embeddings and character embedding-based feature vectors are concatenated ($\Vert $) to obtain the representation of the input tokens.
A source sentence $\mathbf {S}$ is represented by vectors of its tokens $\mathbf {x}_1, \mathbf {x}_2,....,\mathbf {x}_n$, where $\mathbf {x}_i \in \mathbb {R}^{(d_w+d_f)}$ is the vector representation of the $i$th word and $n$ is the length of $\mathbf {S}$. These vectors $\mathbf {x}_i$ are passed to a bi-directional LSTM BIBREF8 (Bi-LSTM) to obtain the hidden representation $\mathbf {h}_i^E$. We set the hidden dimension of the forward and backward LSTM of the Bi-LSTM to be $d_h/2$ to obtain $\mathbf {h}_i^E \in \mathbb {R}^{d_h}$, where $d_h$ is the hidden dimension of the sequence generator LSTM of the decoder described below.
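A minimal PyTorch sketch of this encoder is given below, with dimensions taken from the parameter settings reported later (d_w = 300, d_c = 50, d_f = 50, d_h = 300). Vocabulary sizes, padding, and other engineering details are simplifying assumptions.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Word embedding + character-CNN feature + Bi-LSTM encoder (simplified sketch)."""
    def __init__(self, vocab_size, char_size, d_w=300, d_c=50, d_f=50, d_h=300):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_w)
        self.char_emb = nn.Embedding(char_size, d_c)
        self.char_cnn = nn.Conv1d(d_c, d_f, kernel_size=3, padding=1)   # filter width 3
        self.bilstm = nn.LSTM(d_w + d_f, d_h // 2, bidirectional=True, batch_first=True)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, n); char_ids: (batch, n, max_word_len), here max_word_len = 10
        b, n, L = char_ids.size()
        c = self.char_emb(char_ids).view(b * n, L, -1).transpose(1, 2)   # (b*n, d_c, L)
        char_feat = self.char_cnn(c).max(dim=2).values.view(b, n, -1)    # max-pooling over characters
        x = torch.cat([self.word_emb(word_ids), char_feat], dim=-1)      # (b, n, d_w + d_f)
        h_enc, _ = self.bilstm(x)                                        # (b, n, d_h)
        return h_enc

enc = Encoder(vocab_size=40000, char_size=100)
h = enc(torch.randint(0, 40000, (2, 12)), torch.randint(0, 100, (2, 12, 10)))
print(h.shape)   # torch.Size([2, 12, 300])
```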
Encoder-Decoder Architecture ::: Word-level Decoder & Copy Mechanism
A target sequence $\mathbf {T}$ is represented by only word embedding vectors of its tokens $\mathbf {y}_0, \mathbf {y}_1,....,\mathbf {y}_m$ where $\mathbf {y}_i \in \mathbb {R}^{d_w}$ is the embedding vector of the $i$th token and $m$ is the length of the target sequence. $\mathbf {y}_0$ and $\mathbf {y}_m$ represent the embedding vector of the SOS and EOS token respectively. The decoder generates one token at a time and stops when EOS is generated. We use an LSTM as the decoder and at time step $t$, the decoder takes the source sentence encoding ($\mathbf {e}_t \in \mathbb {R}^{d_h}$) and the previous target word embedding ($\mathbf {y}_{t-1}$) as the input and generates the hidden representation of the current token ($\mathbf {h}_t^D \in \mathbb {R}^{d_h}$). The sentence encoding vector $\mathbf {e}_t$ can be obtained using attention mechanism. $\mathbf {h}_t^D$ is projected to the vocabulary $V$ using a linear layer with weight matrix $\mathbf {W}_v \in \mathbb {R}^{\vert V \vert \times d_h}$ and bias vector $\mathbf {b}_v \in \mathbb {R}^{\vert V \vert }$ (projection layer).
$\mathbf {o}_t$ represents the normalized scores of all the words in the embedding vocabulary at time step $t$. $\mathbf {h}_{t-1}^D$ is the previous hidden state of the LSTM.
The projection layer of the decoder maps the decoder output to the entire vocabulary. During training, we use the gold label target tokens directly. However, during inference, the decoder may predict a token from the vocabulary which is not present in the current sentence or the set of relations or the special tokens. To prevent this, we use a masking technique while applying the softmax operation at the projection layer. We mask (exclude) all words of the vocabulary except the current source sentence tokens, relation tokens, separator tokens (`;', `$\vert $'), UNK, and EOS tokens in the softmax operation. To mask (exclude) some word from softmax, we set the corresponding value in $\hat{\mathbf {o}}_t$ at $-\infty $ and the corresponding softmax score will be zero. This ensures the copying of entities from the source sentence only. We include the UNK token in the softmax operation to make sure that the model generates new entities during inference. If the decoder predicts an UNK token, we replace it with the corresponding source word which has the highest attention score. During inference, after decoding is finished, we extract all tuples based on the special tokens, remove duplicate tuples and tuples in which both entities are the same or tuples where the relation token is not from the relation set. This model is referred to as WordDecoding (WDec) henceforth.
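The masking step can be sketched as below: vocabulary positions outside the allowed set (current source tokens, relation tokens, separators, UNK, EOS) receive a score of negative infinity so that their softmax probability becomes zero. The index bookkeeping is an illustrative assumption.

```python
import torch

def masked_softmax(logits, allowed_ids):
    # logits: (batch, |V|) projection-layer scores; allowed_ids: per-example lists of
    # vocabulary indices that may be generated (source tokens, relations, ';', '|', UNK, EOS).
    mask = torch.full_like(logits, float("-inf"))
    for b, ids in enumerate(allowed_ids):
        mask[b, ids] = 0.0                       # keep allowed entries unchanged
    return torch.softmax(logits + mask, dim=-1)  # disallowed entries get probability zero

probs = masked_softmax(torch.randn(1, 8), [[0, 2, 5, 7]])
print(probs)                                     # non-zero only at indices 0, 2, 5, 7
```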
Encoder-Decoder Architecture ::: Pointer Network-Based Decoder
In the second approach, we identify the entities in the sentence using their start and end locations. We remove the special tokens and relation names from the word vocabulary and word embeddings are used only at the encoder side along with character embeddings. We use an additional relation embedding matrix $\mathbf {E}_r \in \mathbb {R}^{\vert R \vert \times d_r}$ at the decoder side of our model, where $R$ is the set of relations and $d_r$ is the dimension of relation vectors. The relation set $R$ includes a special relation token EOS which indicates the end of the sequence. Relation tuples are represented as a sequence $T=y_0, y_1,....,y_m$, where $y_t$ is a tuple consisting of four indexes in the source sentence indicating the start and end location of the two entities and a relation between them (see Table TABREF1). $y_0$ is a dummy tuple that represents the start tuple of the sequence and $y_m$ functions as the end tuple of the sequence which has EOS as the relation (entities are ignored for this tuple). The decoder consists of an LSTM with hidden dimension $d_h$ to generate the sequence of tuples, two pointer networks to find the two entities, and a classification network to find the relation of a tuple. At time step $t$, the decoder takes the source sentence encoding ($\mathbf {e}_t \in \mathbb {R}^{d_h}$) and the representation of all previously generated tuples ($\mathbf {y}_{prev}=\sum _{j=0}^{t-1}\mathbf {y}_{j}$) as the input and generates the hidden representation of the current tuple, $\mathbf {h}_t^D \in \mathbb {R}^{d_h}$. The sentence encoding vector $\mathbf {e}_t$ is obtained using an attention mechanism as explained later. Relation tuples are a set and to prevent the decoder from generating the same tuple again, we pass the information about all previously generated tuples at each time step of decoding. $\mathbf {y}_j$ is the vector representation of the tuple predicted at time step $j < t$ and we use the zero vector ($\mathbf {y}_0=\overrightarrow{0}$) to represent the dummy tuple $y_0$. $\mathbf {h}_{t-1}^D$ is the hidden state of the LSTM at time step $t-1$.
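One decoding step of this tuple-level decoder might be sketched as follows, assuming the sentence encoding and the summed previous-tuple vector are concatenated before entering the LSTM cell (the text does not spell out the exact combination, so this is an assumption). Dimensions follow the parameter settings given later.

```python
import torch
import torch.nn as nn

d_h, d_p, d_r = 300, 300, 300
tuple_dim = 8 * d_p + d_r                               # size of a predicted tuple vector y_j
decoder_cell = nn.LSTMCell(d_h + tuple_dim, d_h)        # input: [e_t || y_prev] (assumed concatenation)

def decode_step(e_t, prev_tuple_vecs, state):
    # e_t: (batch, d_h) attentive sentence encoding; prev_tuple_vecs: list of (batch, tuple_dim) vectors
    if prev_tuple_vecs:
        y_prev = torch.stack(prev_tuple_vecs, dim=0).sum(dim=0)
    else:
        y_prev = e_t.new_zeros(e_t.size(0), tuple_dim)   # dummy tuple y_0 is the zero vector
    h_t, c_t = decoder_cell(torch.cat([e_t, y_prev], dim=-1), state)
    return h_t, (h_t, c_t)

state = (torch.zeros(2, d_h), torch.zeros(2, d_h))
h_t, state = decode_step(torch.randn(2, d_h), [], state)
print(h_t.shape)                                         # torch.Size([2, 300])
```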
Encoder-Decoder Architecture ::: Relation Tuple Extraction
After obtaining the hidden representation of the current tuple $\mathbf {h}_t^D$, we first find the start and end pointers of the two entities in the source sentence. We concatenate the vector $\mathbf {h}_t^D$ with the hidden vectors $\mathbf {h}_i^E$ of the encoder and pass them to a Bi-LSTM layer with hidden dimension $d_p$ for forward and backward LSTM. The hidden vectors of this Bi-LSTM layer $\mathbf {h}_i^k \in \mathbb {R}^{2d_p}$ are passed to two feed-forward networks (FFN) with softmax to convert each hidden vector into two scalar values between 0 and 1. Softmax operation is applied across all the words in the input sentence. These two scalar values represent the probability of the corresponding source sentence token to be the start and end location of the first entity. This Bi-LSTM layer with the two feed-forward layers is the first pointer network which identifies the first entity of the current relation tuple.
where $\mathbf {W}_s^1 \in \mathbb {R}^{1 \times 2d_p}$, $\mathbf {W}_e^1 \in \mathbb {R}^{1 \times 2d_p}$, ${b}_s^1$, and ${b}_e^1$ are the weights and bias parameters of the feed-forward layers. ${s}_i^1$, ${e}_i^1$ represent the normalized probabilities of the $i$th source word being the start and end token of the first entity of the predicted tuple. We use another pointer network to extract the second entity of the tuple. We concatenate the hidden vectors $\mathbf {h}_i^k$ with $\mathbf {h}_t^D$ and $\mathbf {h}_i^E$ and pass them to the second pointer network to obtain ${s}_i^2$ and ${e}_i^2$, which represent the normalized probabilities of the $i$th source word being the start and end of the second entity. These normalized probabilities are used to find the vector representation of the two entities, $\mathbf {a}_t^1$ and $\mathbf {a}_t^2$.
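A PyTorch sketch of one such pointer network is given below; the second pointer network additionally receives the hidden vectors of the first one, which is modeled here through the optional extra input. Module and argument names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EntityPointer(nn.Module):
    """Bi-LSTM over [encoder state || decoder state (|| extra)] with start/end feed-forward layers."""
    def __init__(self, d_h=300, d_p=300, extra_dim=0):
        super().__init__()
        self.bilstm = nn.LSTM(2 * d_h + extra_dim, d_p, bidirectional=True, batch_first=True)
        self.start_ffn = nn.Linear(2 * d_p, 1)
        self.end_ffn = nn.Linear(2 * d_p, 1)

    def forward(self, h_enc, h_dec, extra=None):
        # h_enc: (batch, n, d_h); h_dec: (batch, d_h); extra: optional (batch, n, extra_dim)
        n = h_enc.size(1)
        inp = torch.cat([h_enc, h_dec.unsqueeze(1).expand(-1, n, -1)], dim=-1)
        if extra is not None:
            inp = torch.cat([inp, extra], dim=-1)
        h_k, _ = self.bilstm(inp)                                        # (batch, n, 2*d_p)
        start = torch.softmax(self.start_ffn(h_k).squeeze(-1), dim=-1)   # probabilities over source words
        end = torch.softmax(self.end_ffn(h_k).squeeze(-1), dim=-1)
        return start, end, h_k

ptr1 = EntityPointer()                    # first entity
ptr2 = EntityPointer(extra_dim=2 * 300)   # second entity also sees the first pointer's hidden vectors
```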
We concatenate the entity vector representations $\mathbf {a}_t^1$ and $\mathbf {a}_t^2$ with $\mathbf {h}_t^D$ and pass it to a feed-forward network (FFN) with softmax to find the relation. This feed-forward layer has a weight matrix $\mathbf {W}_r \in \mathbb {R}^{\vert R \vert \times (8d_p + d_h)}$ and a bias vector $\mathbf {b}_r \in \mathbb {R}^{\vert R \vert }$.
$\mathbf {r}_t$ represents the normalized probabilities of the relation at time step $t$. The relation embedding vector $\mathbf {z}_t$ is obtained using $\mathrm {argmax}$ of $\mathbf {r}_t$ and $\mathbf {E}_r$. $\mathbf {y}_t \in \mathbb {R}^{(8d_p + d_r)}$ is the vector representation of the tuple predicted at time step $t$. During training, we pass the embedding vector of the gold label relation in place of the predicted relation. So the $\mathrm {argmax}$ function does not affect the back-propagation during training. The decoder stops the sequence generation process when the predicted relation is EOS. This is the classification network of the decoder.
During inference, we select the start and end location of the two entities such that the product of the four pointer probabilities is maximized keeping the constraints that the two entities do not overlap with each other and $1 \le b \le e \le n$ where $b$ and $e$ are the start and end location of the corresponding entities. We first choose the start and end location of entity 1 based on the maximum product of the corresponding start and end pointer probabilities. Then we find entity 2 in a similar way excluding the span of entity 1 to avoid overlap. The same procedure is repeated but this time we first find entity 2 followed by entity 1. We choose that pair of entities which gives the higher product of four pointer probabilities between these two choices. This model is referred to as PtrNetDecoding (PNDec) henceforth.
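The span-selection rule at inference time can be written as a brute-force search, as sketched below; it assumes that neither entity span covers the whole sentence and ignores batching for clarity.

```python
def best_span(start_probs, end_probs, forbidden=frozenset()):
    # Return the (start, end) pair with the largest product of pointer probabilities
    # among spans with start <= end that avoid the forbidden token positions.
    n, best, best_score = len(start_probs), None, -1.0
    for b in range(n):
        for e in range(b, n):
            if set(range(b, e + 1)) & forbidden:
                continue
            score = start_probs[b] * end_probs[e]
            if score > best_score:
                best, best_score = (b, e), score
    return best, best_score

def select_entities(s1, e1, s2, e2):
    # Order 1: pick entity 1 first, then entity 2 outside its span; order 2: the reverse.
    a1, sc_a1 = best_span(s1, e1)
    a2, sc_a2 = best_span(s2, e2, forbidden=set(range(a1[0], a1[1] + 1)))
    b2, sc_b2 = best_span(s2, e2)
    b1, sc_b1 = best_span(s1, e1, forbidden=set(range(b2[0], b2[1] + 1)))
    return (a1, a2) if sc_a1 * sc_a2 >= sc_b1 * sc_b2 else (b1, b2)

s1, e1 = [0.7, 0.2, 0.1], [0.1, 0.8, 0.1]
s2, e2 = [0.1, 0.1, 0.8], [0.1, 0.1, 0.8]
print(select_entities(s1, e1, s2, e2))   # ((0, 1), (2, 2))
```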
Encoder-Decoder Architecture ::: Attention Modeling
We experimented with three different attention mechanisms for our word-level decoding model to obtain the source context vector $\mathbf {e}_t$:
(1) Avg.: The context vector is obtained by averaging the hidden vectors of the encoder: $\mathbf {e}_t=\frac{1}{n}\sum _{i=1}^n \mathbf {h}_i^E$
(2) N-gram: The context vector is obtained by the N-gram attention mechanism of BIBREF9 (BIBREF9) with N=3.
$\textnormal {a}_i^g=(\mathbf {h}_n^{E})^T \mathbf {V}^g \mathbf {w}_i^g$, $\alpha ^g = \mathrm {softmax}(\mathbf {a}^g)$
$\mathbf {e}_t=[\mathbf {h}_n^E \Vert \sum _{g=1}^N \mathbf {W}^g (\sum _{i=1}^{\vert G^g \vert } \alpha _i^g \mathbf {w}_i^g)]$
Here, $\mathbf {h}_n^E$ is the last hidden state of the encoder, $g \in \lbrace 1, 2, 3\rbrace $ refers to the word gram combination, $G^g$ is the sequence of g-gram word representations for the input sentence, $\mathbf {w}_i^g$ is the $i$th g-gram vector (2-gram and 3-gram representations are obtained by average pooling), $\alpha _i^g$ is the normalized attention score for the $i$th g-gram vector, $\mathbf {W} \in \mathbb {R}^{d_h \times d_h}$ and $\mathbf {V} \in \mathbb {R}^{d_h \times d_h}$ are trainable parameters.
(3) Single: The context vector is obtained by the attention mechanism proposed by BIBREF10 (BIBREF10). This attention mechanism gives the best performance with the word-level decoding model.
$\mathbf {u}_t^i = \mathbf {W}_{u} \mathbf {h}_i^E, \quad \mathbf {q}_t^i = \mathbf {W}_{q} \mathbf {h}_{t-1}^D + \mathbf {b}_{q}$,
$\textnormal {a}_t^i = \mathbf {v}_a \tanh (\mathbf {q}_t^i + \mathbf {u}_t^i), \quad \alpha _t = \mathrm {softmax}(\mathbf {a}_t)$,
$\mathbf {e}_t = \sum _{i=1}^n \alpha _t^i \mathbf {h}_i^E$
where $\mathbf {W}_u \in \mathbb {R}^{d_h \times d_h}$, $\mathbf {W}_q \in \mathbb {R}^{d_h \times d_h}$, and $\mathbf {v}_a \in \mathbb {R}^{d_h}$ are all trainable attention parameters and $\mathbf {b}_q \in \mathbb {R}^{d_h}$ is a bias vector. $\alpha _t^i$ is the normalized attention score of the $i$th source word at the decoding time step $t$.
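A PyTorch sketch of this single (additive) attention is shown below with d_h = 300; the bias vector b_q is folded into the linear layer, and shapes are illustrative.

```python
import torch
import torch.nn as nn

class SingleAttention(nn.Module):
    def __init__(self, d_h=300):
        super().__init__()
        self.W_u = nn.Linear(d_h, d_h, bias=False)
        self.W_q = nn.Linear(d_h, d_h, bias=True)    # bias acts as b_q
        self.v_a = nn.Linear(d_h, 1, bias=False)

    def forward(self, h_enc, query):
        # h_enc: (batch, n, d_h) encoder states; query: (batch, d_h), e.g. the previous decoder state
        u = self.W_u(h_enc)                                     # (batch, n, d_h)
        q = self.W_q(query).unsqueeze(1)                        # (batch, 1, d_h)
        scores = self.v_a(torch.tanh(q + u)).squeeze(-1)        # (batch, n)
        alpha = torch.softmax(scores, dim=-1)                   # normalized attention weights
        e_t = torch.bmm(alpha.unsqueeze(1), h_enc).squeeze(1)   # (batch, d_h) context vector
        return e_t, alpha
```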
For our pointer network-based decoding model, we use three variants of the single attention model. First, we use $\mathbf {h}_{t-1}^D$ to calculate $\mathbf {q}_t^i$ in the attention mechanism. Next, we use $\mathbf {y}_{prev}$ to calculate $\mathbf {q}_t^i$, where $\mathbf {W}_q \in \mathbb {R}^{(8d_p + d_r) \times d_h}$. In the final variant, we obtain the attentive context vector by concatenating the two attentive vectors obtained using $\mathbf {h}_{t-1}^D$ and $\mathbf {y}_{prev}$. This gives the best performance with the pointer network-based decoding model. These variants are referred to as $\mathrm {dec_{hid}}$, $\mathrm {tup_{prev}}$, and $\mathrm {combo}$ in Table TABREF17.
Encoder-Decoder Architecture ::: Loss Function
We minimize the negative log-likelihood loss of the generated words for word-level decoding ($\mathcal {L}_{word}$) and minimize the sum of negative log-likelihood loss of relation classification and the four pointer locations for pointer network-based decoding ($\mathcal {L}_{ptr}$).
$v_t^b$ is the softmax score of the target word at time step $t$ for the word-level decoding model. $r$, $s$, and $e$ are the softmax score of the corresponding true relation label, true start and end pointer location of an entity. $b$, $t$, and $c$ refer to the $b$th training instance, $t$th time step of decoding, and the two entities of a tuple respectively. $B$ and $T$ are the batch size and maximum time step of the decoder respectively.
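For the pointer-decoding loss, a simplified sketch is given below; it uses logits with cross-entropy (the negative log-likelihood of the softmax) and omits masking of padded time steps.

```python
import torch
import torch.nn.functional as F

def pointer_loss(rel_logits, ptr_logits, rel_gold, ptr_gold):
    # rel_logits: (B, T, |R|); ptr_logits: four tensors of shape (B, T, n) for the
    # start/end of the two entities; rel_gold: (B, T); ptr_gold: four (B, T) index tensors.
    loss = F.cross_entropy(rel_logits.transpose(1, 2), rel_gold)
    for logits, gold in zip(ptr_logits, ptr_gold):
        loss = loss + F.cross_entropy(logits.transpose(1, 2), gold)
    return loss   # averaged over batch and decoding steps by cross_entropy's default reduction
```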
Experiments ::: Datasets
We focus on the task of extracting multiple tuples with overlapping entities from sentences. We choose the New York Times (NYT) corpus for our experiments. This corpus has multiple versions, and we choose the following two versions as their test datasets have a significantly larger number of instances of multiple relation tuples with overlapping entities. (i) The first version is used by BIBREF6 (BIBREF6) (mentioned as NYT in their paper) and has 24 relations. We name this version NYT24. (ii) The second version is used by BIBREF11 (BIBREF11) (mentioned as NYT10 in their paper) and has 29 relations. We name this version NYT29. We select 10% of the original training data and use it as the validation dataset. The remaining 90% is used for training. We include statistics of the training and test datasets in Table TABREF11.
Experiments ::: Parameter Settings
We run the Word2Vec BIBREF12 tool on the NYT corpus to initialize the word embeddings. The character embeddings and relation embeddings are initialized randomly. All embeddings are updated during training. We set the word embedding dimension $d_w=300$, relation embedding dimension $d_r=300$, character embedding dimension $d_c=50$, and character-based word feature dimension $d_f=50$. To extract the character-based word feature vector, we set the CNN filter width at 3 and the maximum length of a word at 10. The hidden dimension $d_h$ of the decoder LSTM cell is set at 300 and the hidden dimension of the forward and the backward LSTM of the encoder is set at 150. The hidden dimension of the forward and backward LSTM of the pointer networks is set at $d_p=300$. The model is trained with mini-batch size of 32 and the network parameters are optimized using Adam BIBREF13. Dropout layers with a dropout rate fixed at $0.3$ are used in our network to avoid overfitting.
Experiments ::: Baselines and Evaluation Metrics
We compare our model with the following state-of-the-art joint entity and relation extraction models:
(1) SPTree BIBREF4: This is an end-to-end neural entity and relation extraction model using sequence LSTM and Tree LSTM. Sequence LSTM is used to identify all the entities first and then Tree LSTM is used to find the relation between all pairs of entities.
(2) Tagging BIBREF5: This is a neural sequence tagging model which jointly extracts the entities and relations using an LSTM encoder and an LSTM decoder. They used a Cartesian product of entity tags and relation tags to encode the entity and relation information together. This model does not work when tuples have overlapping entities.
(3) CopyR BIBREF6: This model uses an encoder-decoder approach for joint extraction of entities and relations. It copies only the last token of an entity from the source sentence. Their best performing multi-decoder model is trained with a fixed number of decoders where each decoder extracts one tuple.
(4) HRL BIBREF11: This model uses a reinforcement learning (RL) algorithm with two levels of hierarchy for tuple extraction. A high-level RL finds the relation and a low-level RL identifies the two entities using a sequence tagging approach. This sequence tagging approach cannot always ensure extraction of exactly two entities.
(5) GraphR BIBREF14: This model considers each token in a sentence as a node in a graph, and edges connecting the nodes as relations between them. They use graph convolution network (GCN) to predict the relations of every edge and then filter out some of the relations.
(6) N-gram Attention BIBREF9: This model uses an encoder-decoder approach with N-gram attention mechanism for knowledge-base completion using distantly supervised data. The encoder uses the source tokens as its vocabulary and the decoder uses the entire Wikidata BIBREF15 entity IDs and relation IDs as its vocabulary. The encoder takes the source sentence as input and the decoder outputs the two entity IDs and relation ID for every tuple. During training, it uses the mapping of entity names and their Wikidata IDs of the entire Wikidata for proper alignment. Our task of extracting relation tuples with the raw entity names from a sentence is more challenging since entity names are not of fixed length. Our more generic approach is also helpful for extracting new entities which are not present in the existing knowledge bases such as Wikidata. We use their N-gram attention mechanism in our model to compare its performance with other attention models (Table TABREF17).
We use the same evaluation method used by BIBREF11 (BIBREF11) in their experiments. We consider the extracted tuples as a set and remove the duplicate tuples. An extracted tuple is considered as correct if the corresponding full entity names are correct and the relation is also correct. We report precision, recall, and F1 score for comparison.
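This exact-match, set-based evaluation can be summarized by the short function below; corpus-level aggregation of the counts before computing precision and recall is assumed.

```python
def precision_recall_f1(pred_tuples, gold_tuples):
    # pred_tuples / gold_tuples: iterables of (entity1, entity2, relation) with full entity names.
    pred, gold = set(pred_tuples), set(gold_tuples)   # duplicates are removed
    correct = len(pred & gold)                        # exact match of both entities and the relation
    p = correct / len(pred) if pred else 0.0
    r = correct / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```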
Experiments ::: Experimental Results
Among the baselines, HRL achieves significantly higher F1 scores on the two datasets. We run their model and our models five times and report the median results in Table TABREF15. Scores of other baselines in Table TABREF15 are taken from previous published papers BIBREF6, BIBREF11, BIBREF14. Our WordDecoding (WDec) model achieves F1 scores that are $3.9\%$ and $4.1\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. Similarly, our PtrNetDecoding (PNDec) model achieves F1 scores that are $3.0\%$ and $1.3\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. We perform a statistical significance test (t-test) under a bootstrap pairing between HRL and our models and see that the higher F1 scores achieved by our models are statistically significant ($p < 0.001$). Next, we combine the outputs of five runs of our models and five runs of HRL to build ensemble models. For a test instance, we include those tuples which are extracted in the majority ($\ge 3$) of the five runs. This ensemble mechanism increases the precision significantly on both datasets with a small improvement in recall as well. In the ensemble scenario, compared to HRL, WDec achieves $4.2\%$ and $3.5\%$ higher F1 scores and PNDec achieves $4.2\%$ and $2.9\%$ higher F1 scores on the NYT29 and NYT24 datasets respectively.
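The majority-vote ensembling described here can be sketched as follows for a single test instance; the tuple representation is assumed to be hashable (e.g. a tuple of strings).

```python
from collections import Counter

def ensemble(run_outputs, min_votes=3):
    # run_outputs: list of tuple sets, one per run (five runs in the experiments above).
    votes = Counter(t for run in run_outputs for t in set(run))
    return {t for t, c in votes.items() if c >= min_votes}   # keep tuples extracted by a majority
```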
Analysis and Discussion ::: Ablation Studies
We include the performance of different attention mechanisms with our WordDecoding model, effects of our masking-based copy mechanism, and ablation results of three variants of the single attention mechanism with our PtrNetDecoding model in Table TABREF17. WordDecoding with single attention achieves the highest F1 score on both datasets. We also see that our copy mechanism improves F1 scores by around 4–7% in each attention mechanism with both datasets. PtrNetDecoding achieves the highest F1 scores when we combine the two attention mechanisms with respect to the previous hidden vector of the decoder LSTM ($\mathbf {h}_{t-1}^D$) and representation of all previously extracted tuples ($\mathbf {y}_{prev}$).
Analysis and Discussion ::: Performance Analysis
From Table TABREF15, we see that CopyR, HRL, and our models achieve significantly higher F1 scores on the NYT24 dataset than on the NYT29 dataset. Both datasets have a similar set of relations and similar texts (NYT). So task-wise both datasets should pose a similar challenge. However, the F1 scores suggest that the NYT24 dataset is easier than NYT29. The reason is that NYT24 has around 72.0% of overlapping tuples between the training and test data (% of test tuples that appear in the training data with different source sentences). In contrast, NYT29 has only 41.7% of overlapping tuples. Due to the memorization power of deep neural networks, the models can achieve much higher F1 scores on NYT24. The difference between the F1 scores of WordDecoding and PtrNetDecoding on NYT24 is marginally higher than on NYT29, since WordDecoding has more trainable parameters (about 27 million) than PtrNetDecoding (about 24.5 million) and NYT24 has very high tuple overlap. However, their ensemble versions achieve closer F1 scores on both datasets.
Despite achieving marginally lower F1 scores, the pointer network-based model can be considered more intuitive and suitable for this task. WordDecoding may not extract the special tokens and relation tokens at the right time steps, which is critical for finding the tuples from the generated sequence of words. PtrNetDecoding always extracts two entities of varying length and a relation for every tuple. We also observe that PtrNetDecoding is more than two times faster and takes one-third of the GPU memory of WordDecoding during training and inference. This speedup and smaller memory consumption are achieved due to the smaller number of decoding steps of PtrNetDecoding compared to WordDecoding. PtrNetDecoding extracts an entire tuple at each time step, whereas WordDecoding extracts just one word at each time step and so requires eight time steps on average to extract a tuple (assuming that the average length of an entity is two). The softmax operation at the projection layer of WordDecoding is applied across the entire vocabulary, and the vocabulary size can be large (more than 40,000 for our datasets). In the case of PtrNetDecoding, the softmax operation is applied across the sentence length (maximum of 100 in our experiments) and across the relation set (24 and 29 for our datasets). The costly softmax operation and the higher number of decoding time steps significantly increase the training and inference time for WordDecoding. The encoder-decoder model proposed by BIBREF9 (BIBREF9) faces a similar softmax-related problem as their target vocabulary contains the entire set of Wikidata entity IDs and relation IDs, which number in the millions. HRL, which uses a deep reinforcement learning algorithm, takes around 8x more time to train than PtrNetDecoding with a similar GPU configuration. The speedup and smaller memory consumption will be useful when we move from sentence-level extraction to document-level extraction, since document length is much greater than sentence length and a document contains a higher number of tuples.
Analysis and Discussion ::: Error Analysis
The relation tuples extracted by a joint model can be erroneous for multiple reasons such as: (i) extracted entities are wrong; (ii) extracted relations are wrong; (iii) pairings of entities with relations are wrong. To see the effects of the first two reasons, we analyze the performance of HRL and our models on entity generation and relation generation separately. For entity generation, we only consider those entities which are part of some tuple. For relation generation, we only consider the relations of the tuples. We include the performance of our two models and HRL on entity generation and relation generation in Table TABREF20. Our proposed models perform better than HRL on both tasks. Comparing our two models, PtrNetDecoding performs better than WordDecoding on both tasks, although WordDecoding achieves higher F1 scores in tuple extraction. This suggests that PtrNetDecoding makes more errors while pairing the entities with relations. We further analyze the outputs of our models and HRL to determine the errors due to ordering of entities (Order), mismatch of the first entity (Ent1), and mismatch of the second entity (Ent2) in Table TABREF21. WordDecoding generates fewer errors than the other two models in all the categories and thus achieves the highest F1 scores on both datasets.
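One plausible way to compute such an error breakdown is sketched below; the exact matching rules used in this analysis are not spelled out in the text, so the conditions here are assumptions.

```python
def error_type(pred, gold_tuples):
    # pred: an incorrectly extracted (entity1, entity2, relation); gold_tuples: set of gold tuples.
    e1, e2, r = pred
    if (e2, e1, r) in gold_tuples:
        return "Order"   # entities are swapped
    if any(g[1] == e2 and g[2] == r and g[0] != e1 for g in gold_tuples):
        return "Ent1"    # first entity mismatches
    if any(g[0] == e1 and g[2] == r and g[1] != e2 for g in gold_tuples):
        return "Ent2"    # second entity mismatches
    return "Other"
```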
Related Work
Traditionally, researchers BIBREF0, BIBREF1, BIBREF2, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25 used a pipeline approach for relation tuple extraction where relations were identified using a classification network after all entities were detected. BIBREF26 (BIBREF26) used an encoder-decoder model to extract multiple relations present between two given entities.
Recently, some researchers BIBREF3, BIBREF4, BIBREF27, BIBREF28 tried to bring these two tasks closer together by sharing their parameters and optimizing them together. BIBREF5 (BIBREF5) used a sequence tagging scheme to jointly extract the entities and relations. BIBREF6 (BIBREF6) proposed an encoder-decoder model with copy mechanism to extract relation tuples with overlapping entities. BIBREF11 (BIBREF11) proposed a joint extraction model based on reinforcement learning (RL). BIBREF14 (BIBREF14) used a graph convolution network (GCN) where they treated each token in a sentence as a node in a graph and edges were considered as relations. BIBREF9 (BIBREF9) used an N-gram attention mechanism with an encoder-decoder model for completion of knowledge bases using distant supervised data.
Encoder-decoder models have been used for many NLP applications such as neural machine translation BIBREF29, BIBREF10, BIBREF30, sentence generation from structured data BIBREF31, BIBREF32, and open information extraction BIBREF33, BIBREF34. Pointer networks BIBREF35 have been used to extract a text span from text for tasks such as question answering BIBREF36, BIBREF37. For the first time, we use pointer networks with an encoder-decoder model to extract relation tuples from sentences.
Conclusion
Extracting relation tuples from sentences is a challenging task due to the varying lengths of entities, the presence of multiple tuples, and the overlap of entities among tuples. In this paper, we propose two novel approaches using an encoder-decoder architecture to address this task. Experiments on the New York Times (NYT) corpus show that our proposed models achieve significantly improved F1 scores and set a new state of the art. As future work, we would like to explore our proposed models for document-level tuple extraction.
Acknowledgments
We would like to thank the anonymous reviewers for their valuable and constructive comments on this paper. | WordDecoding (WDec) model achieves F1 scores that are $3.9\%$ and $4.1\%$ higher than HRL on the NYT29 and NYT24 datasets respectively, PtrNetDecoding (PNDec) model achieves F1 scores that are $3.0\%$ and $1.3\%$ higher than HRL on the NYT29 and NYT24 datasets respectively |
1898f999626f9a6da637bd8b4857e5eddf2fc729 | 1898f999626f9a6da637bd8b4857e5eddf2fc729_1 | Q: How higher are F1 scores compared to previous work?
Text: Introduction
Distantly-supervised information extraction systems extract relation tuples with a set of pre-defined relations from text. Traditionally, researchers BIBREF0, BIBREF1, BIBREF2 use pipeline approaches where a named entity recognition (NER) system is used to identify the entities in a sentence and then a classifier is used to find the relation (or no relation) between them. However, due to the complete separation of entity detection and relation classification, these models miss the interaction between multiple relation tuples present in a sentence.
Recently, several neural network-based models BIBREF3, BIBREF4 were proposed to jointly extract entities and relations from a sentence. These models used a parameter-sharing mechanism to extract the entities and relations in the same network. But they still find the relations after identifying all the entities and do not fully capture the interaction among multiple tuples. BIBREF5 (BIBREF5) proposed a joint extraction model based on neural sequence tagging scheme. But their model could not extract tuples with overlapping entities in a sentence as it could not assign more than one tag to a word. BIBREF6 (BIBREF6) proposed a neural encoder-decoder model for extracting relation tuples with overlapping entities. However, they used a copy mechanism to copy only the last token of the entities, thus this model could not extract the full entity names. Also, their best performing model used a separate decoder to extract each tuple which limited the power of their model. This model was trained with a fixed number of decoders and could not extract tuples beyond that number during inference. Encoder-decoder models are powerful models and they are successful in many NLP tasks such as machine translation, sentence generation from structured data, and open information extraction.
In this paper, we explore how encoder-decoder models can be used effectively for extracting relation tuples from sentences. There are three major challenges in this task: (i) The model should be able to extract entities and relations together. (ii) It should be able to extract multiple tuples with overlapping entities. (iii) It should be able to extract exactly two entities of a tuple with their full names. To address these challenges, we propose two novel approaches using encoder-decoder architecture. We first propose a new representation scheme for relation tuples (Table TABREF1) such that it can represent multiple tuples with overlapping entities and different lengths of entities in a simple way. We employ an encoder-decoder model where the decoder extracts one word at a time like machine translation models. At the end of sequence generation, due to the unique representation of the tuples, we can extract the tuples from the sequence of words. Although this model performs quite well, generating one word at a time is somewhat unnatural for this task. Each tuple has exactly two entities and one relation, and each entity appears as a continuous text span in a sentence. The most effective way to identify them is to find their start and end location in the sentence. Each relation tuple can then be represented using five items: start and end location of the two entities and the relation between them (see Table TABREF1). Keeping this in mind, we propose a pointer network-based decoding framework. This decoder consists of two pointer networks which find the start and end location of the two entities in a sentence, and a classification network which identifies the relation between them. At every time step of the decoding, this decoder extracts an entire relation tuple, not just a word. Experiments on the New York Times (NYT) datasets show that our approaches work effectively for this task and achieve state-of-the-art performance. To summarize, the contributions of this paper are as follows:
(1) We propose a new representation scheme for relation tuples such that an encoder-decoder model, which extracts one word at each time step, can still find multiple tuples with overlapping entities and tuples with multi-token entities from sentences. We also propose a masking-based copy mechanism to extract the entities from the source sentence only.
(2) We propose a modification in the decoding framework with pointer networks to make the encoder-decoder model more suitable for this task. At every time step, this decoder extracts an entire relation tuple, not just a word. This new decoding framework helps in speeding up the training process and uses less resources (GPU memory). This will be an important factor when we move from sentence-level tuple extraction to document-level extraction.
(3) Experiments on the NYT datasets show that our approaches outperform all the previous state-of-the-art models significantly and set a new benchmark on these datasets.
Task Description
A relation tuple consists of two entities and a relation. Such tuples can be found in sentences where an entity is a text span in a sentence and a relation comes from a pre-defined set $R$. These tuples may share one or both entities among them. Based on this, we divide the sentences into three classes: (i) No Entity Overlap (NEO): A sentence in this class has one or more tuples, but they do not share any entities. (ii) Entity Pair Overlap (EPO): A sentence in this class has more than one tuple, and at least two tuples share both the entities in the same or reverse order. (iii) Single Entity Overlap (SEO): A sentence in this class has more than one tuple and at least two tuples share exactly one entity. It should be noted that a sentence can belong to both EPO and SEO classes. Our task is to extract all relation tuples present in a sentence.
Encoder-Decoder Architecture
In this task, input to the system is a sequence of words, and output is a set of relation tuples. In our first approach, we represent each tuple as entity1 ; entity2 ; relation. We use `;' as a separator token to separate the tuple components. Multiple tuples are separated using the `$\vert $' token. We have included one example of such representation in Table TABREF1. Multiple relation tuples with overlapping entities and different lengths of entities can be represented in a simple way using these special tokens (; and $\vert $). During inference, after the end of sequence generation, relation tuples can be extracted easily using these special tokens. Due to this uniform representation scheme, where entity tokens, relation tokens, and special tokens are treated similarly, we use a shared vocabulary between the encoder and decoder which includes all of these tokens. The input sentence contains clue words for every relation which can help generate the relation tokens. We use two special tokens so that the model can distinguish between the beginning of a relation tuple and the beginning of a tuple component. To extract the relation tuples from a sentence using the encoder-decoder model, the model has to generate the entity tokens, find relation clue words and map them to the relation tokens, and generate the special tokens at appropriate time. Our experiments show that the encoder-decoder models can achieve this quite effectively.
Encoder-Decoder Architecture ::: Embedding Layer & Encoder
We create a single vocabulary $V$ consisting of the source sentence tokens, relation names from relation set $R$, special separator tokens (`;', `$\vert $'), start-of-target-sequence token (SOS), end-of-target-sequence token (EOS), and unknown word token (UNK). Word-level embeddings are formed by two components: (1) pre-trained word vectors (2) character embedding-based feature vectors. We use a word embedding layer $\mathbf {E}_w \in \mathbb {R}^{\vert V \vert \times d_w}$ and a character embedding layer $\mathbf {E}_c \in \mathbb {R}^{\vert A \vert \times d_c}$, where $d_w$ is the dimension of word vectors, $A$ is the character alphabet of input sentence tokens, and $d_c$ is the dimension of character embedding vectors. Following BIBREF7 (BIBREF7), we use a convolutional neural network with max-pooling to extract a feature vector of size $d_f$ for every word. Word embeddings and character embedding-based feature vectors are concatenated ($\Vert $) to obtain the representation of the input tokens.
A source sentence $\mathbf {S}$ is represented by vectors of its tokens $\mathbf {x}_1, \mathbf {x}_2,....,\mathbf {x}_n$, where $\mathbf {x}_i \in \mathbb {R}^{(d_w+d_f)}$ is the vector representation of the $i$th word and $n$ is the length of $\mathbf {S}$. These vectors $\mathbf {x}_i$ are passed to a bi-directional LSTM BIBREF8 (Bi-LSTM) to obtain the hidden representation $\mathbf {h}_i^E$. We set the hidden dimension of the forward and backward LSTM of the Bi-LSTM to be $d_h/2$ to obtain $\mathbf {h}_i^E \in \mathbb {R}^{d_h}$, where $d_h$ is the hidden dimension of the sequence generator LSTM of the decoder described below.
Encoder-Decoder Architecture ::: Word-level Decoder & Copy Mechanism
A target sequence $\mathbf {T}$ is represented by only word embedding vectors of its tokens $\mathbf {y}_0, \mathbf {y}_1,....,\mathbf {y}_m$ where $\mathbf {y}_i \in \mathbb {R}^{d_w}$ is the embedding vector of the $i$th token and $m$ is the length of the target sequence. $\mathbf {y}_0$ and $\mathbf {y}_m$ represent the embedding vector of the SOS and EOS token respectively. The decoder generates one token at a time and stops when EOS is generated. We use an LSTM as the decoder and at time step $t$, the decoder takes the source sentence encoding ($\mathbf {e}_t \in \mathbb {R}^{d_h}$) and the previous target word embedding ($\mathbf {y}_{t-1}$) as the input and generates the hidden representation of the current token ($\mathbf {h}_t^D \in \mathbb {R}^{d_h}$). The sentence encoding vector $\mathbf {e}_t$ can be obtained using attention mechanism. $\mathbf {h}_t^D$ is projected to the vocabulary $V$ using a linear layer with weight matrix $\mathbf {W}_v \in \mathbb {R}^{\vert V \vert \times d_h}$ and bias vector $\mathbf {b}_v \in \mathbb {R}^{\vert V \vert }$ (projection layer).
$\mathbf {o}_t$ represents the normalized scores of all the words in the embedding vocabulary at time step $t$. $\mathbf {h}_{t-1}^D$ is the previous hidden state of the LSTM.
The projection layer of the decoder maps the decoder output to the entire vocabulary. During training, we use the gold label target tokens directly. However, during inference, the decoder may predict a token from the vocabulary which is not present in the current sentence or the set of relations or the special tokens. To prevent this, we use a masking technique while applying the softmax operation at the projection layer. We mask (exclude) all words of the vocabulary except the current source sentence tokens, relation tokens, separator tokens (`;', `$\vert $'), UNK, and EOS tokens in the softmax operation. To mask (exclude) some word from softmax, we set the corresponding value in $\hat{\mathbf {o}}_t$ at $-\infty $ and the corresponding softmax score will be zero. This ensures the copying of entities from the source sentence only. We include the UNK token in the softmax operation to make sure that the model generates new entities during inference. If the decoder predicts an UNK token, we replace it with the corresponding source word which has the highest attention score. During inference, after decoding is finished, we extract all tuples based on the special tokens, remove duplicate tuples and tuples in which both entities are the same or tuples where the relation token is not from the relation set. This model is referred to as WordDecoding (WDec) henceforth.
Encoder-Decoder Architecture ::: Pointer Network-Based Decoder
In the second approach, we identify the entities in the sentence using their start and end locations. We remove the special tokens and relation names from the word vocabulary and word embeddings are used only at the encoder side along with character embeddings. We use an additional relation embedding matrix $\mathbf {E}_r \in \mathbb {R}^{\vert R \vert \times d_r}$ at the decoder side of our model, where $R$ is the set of relations and $d_r$ is the dimension of relation vectors. The relation set $R$ includes a special relation token EOS which indicates the end of the sequence. Relation tuples are represented as a sequence $T=y_0, y_1,....,y_m$, where $y_t$ is a tuple consisting of four indexes in the source sentence indicating the start and end location of the two entities and a relation between them (see Table TABREF1). $y_0$ is a dummy tuple that represents the start tuple of the sequence and $y_m$ functions as the end tuple of the sequence which has EOS as the relation (entities are ignored for this tuple). The decoder consists of an LSTM with hidden dimension $d_h$ to generate the sequence of tuples, two pointer networks to find the two entities, and a classification network to find the relation of a tuple. At time step $t$, the decoder takes the source sentence encoding ($\mathbf {e}_t \in \mathbb {R}^{d_h}$) and the representation of all previously generated tuples ($\mathbf {y}_{prev}=\sum _{j=0}^{t-1}\mathbf {y}_{j}$) as the input and generates the hidden representation of the current tuple, $\mathbf {h}_t^D \in \mathbb {R}^{d_h}$. The sentence encoding vector $\mathbf {e}_t$ is obtained using an attention mechanism as explained later. Relation tuples are a set and to prevent the decoder from generating the same tuple again, we pass the information about all previously generated tuples at each time step of decoding. $\mathbf {y}_j$ is the vector representation of the tuple predicted at time step $j < t$ and we use the zero vector ($\mathbf {y}_0=\overrightarrow{0}$) to represent the dummy tuple $y_0$. $\mathbf {h}_{t-1}^D$ is the hidden state of the LSTM at time step $t-1$.
Encoder-Decoder Architecture ::: Relation Tuple Extraction
After obtaining the hidden representation of the current tuple $\mathbf {h}_t^D$, we first find the start and end pointers of the two entities in the source sentence. We concatenate the vector $\mathbf {h}_t^D$ with the hidden vectors $\mathbf {h}_i^E$ of the encoder and pass them to a Bi-LSTM layer with hidden dimension $d_p$ for forward and backward LSTM. The hidden vectors of this Bi-LSTM layer $\mathbf {h}_i^k \in \mathbb {R}^{2d_p}$ are passed to two feed-forward networks (FFN) with softmax to convert each hidden vector into two scalar values between 0 and 1. Softmax operation is applied across all the words in the input sentence. These two scalar values represent the probability of the corresponding source sentence token to be the start and end location of the first entity. This Bi-LSTM layer with the two feed-forward layers is the first pointer network which identifies the first entity of the current relation tuple.
where $\mathbf {W}_s^1 \in \mathbb {R}^{1 \times 2d_p}$, $\mathbf {W}_e^1 \in \mathbb {R}^{1 \times 2d_p}$, ${b}_s^1$, and ${b}_e^1$ are the weights and bias parameters of the feed-forward layers. ${s}_i^1$, ${e}_i^1$ represent the normalized probabilities of the $i$th source word being the start and end token of the first entity of the predicted tuple. We use another pointer network to extract the second entity of the tuple. We concatenate the hidden vectors $\mathbf {h}_i^k$ with $\mathbf {h}_t^D$ and $\mathbf {h}_i^E$ and pass them to the second pointer network to obtain ${s}_i^2$ and ${e}_i^2$, which represent the normalized probabilities of the $i$th source word being the start and end of the second entity. These normalized probabilities are used to find the vector representation of the two entities, $\mathbf {a}_t^1$ and $\mathbf {a}_t^2$.
We concatenate the entity vector representations $\mathbf {a}_t^1$ and $\mathbf {a}_t^2$ with $\mathbf {h}_t^D$ and pass it to a feed-forward network (FFN) with softmax to find the relation. This feed-forward layer has a weight matrix $\mathbf {W}_r \in \mathbb {R}^{\vert R \vert \times (8d_p + d_h)}$ and a bias vector $\mathbf {b}_r \in \mathbb {R}^{\vert R \vert }$.
$\mathbf {r}_t$ represents the normalized probabilities of the relation at time step $t$. The relation embedding vector $\mathbf {z}_t$ is obtained using $\mathrm {argmax}$ of $\mathbf {r}_t$ and $\mathbf {E}_r$. $\mathbf {y}_t \in \mathbb {R}^{(8d_p + d_r)}$ is the vector representation of the tuple predicted at time step $t$. During training, we pass the embedding vector of the gold label relation in place of the predicted relation. So the $\mathrm {argmax}$ function does not affect the back-propagation during training. The decoder stops the sequence generation process when the predicted relation is EOS. This is the classification network of the decoder.
During inference, we select the start and end location of the two entities such that the product of the four pointer probabilities is maximized keeping the constraints that the two entities do not overlap with each other and $1 \le b \le e \le n$ where $b$ and $e$ are the start and end location of the corresponding entities. We first choose the start and end location of entity 1 based on the maximum product of the corresponding start and end pointer probabilities. Then we find entity 2 in a similar way excluding the span of entity 1 to avoid overlap. The same procedure is repeated but this time we first find entity 2 followed by entity 1. We choose that pair of entities which gives the higher product of four pointer probabilities between these two choices. This model is referred to as PtrNetDecoding (PNDec) henceforth.
Encoder-Decoder Architecture ::: Attention Modeling
We experimented with three different attention mechanisms for our word-level decoding model to obtain the source context vector $\mathbf {e}_t$:
(1) Avg.: The context vector is obtained by averaging the hidden vectors of the encoder: $\mathbf {e}_t=\frac{1}{n}\sum _{i=1}^n \mathbf {h}_i^E$
(2) N-gram: The context vector is obtained by the N-gram attention mechanism of BIBREF9 (BIBREF9) with N=3.
$\textnormal {a}_i^g=(\mathbf {h}_n^{E})^T \mathbf {V}^g \mathbf {w}_i^g$, $\alpha ^g = \mathrm {softmax}(\mathbf {a}^g)$
$\mathbf {e}_t=[\mathbf {h}_n^E \Vert \sum _{g=1}^N \mathbf {W}^g (\sum _{i=1}^{\vert G^g \vert } \alpha _i^g \mathbf {w}_i^g)]$
Here, $\mathbf {h}_n^E$ is the last hidden state of the encoder, $g \in \lbrace 1, 2, 3\rbrace $ refers to the word gram combination, $G^g$ is the sequence of g-gram word representations for the input sentence, $\mathbf {w}_i^g$ is the $i$th g-gram vector (2-gram and 3-gram representations are obtained by average pooling), $\alpha _i^g$ is the normalized attention score for the $i$th g-gram vector, $\mathbf {W} \in \mathbb {R}^{d_h \times d_h}$ and $\mathbf {V} \in \mathbb {R}^{d_h \times d_h}$ are trainable parameters.
(3) Single: The context vector is obtained by the attention mechanism proposed by BIBREF10 (BIBREF10). This attention mechanism gives the best performance with the word-level decoding model.
$\mathbf {u}_t^i = \mathbf {W}_{u} \mathbf {h}_i^E, \quad \mathbf {q}_t^i = \mathbf {W}_{q} \mathbf {h}_{t-1}^D + \mathbf {b}_{q}$,
$\textnormal {a}_t^i = \mathbf {v}_a \tanh (\mathbf {q}_t^i + \mathbf {u}_t^i), \quad \alpha _t = \mathrm {softmax}(\mathbf {a}_t)$,
$\mathbf {e}_t = \sum _{i=1}^n \alpha _t^i \mathbf {h}_i^E$
where $\mathbf {W}_u \in \mathbb {R}^{d_h \times d_h}$, $\mathbf {W}_q \in \mathbb {R}^{d_h \times d_h}$, and $\mathbf {v}_a \in \mathbb {R}^{d_h}$ are all trainable attention parameters and $\mathbf {b}_q \in \mathbb {R}^{d_h}$ is a bias vector. $\alpha _t^i$ is the normalized attention score of the $i$th source word at the decoding time step $t$.
For our pointer network-based decoding model, we use three variants of the single attention model. First, we use $\mathbf {h}_{t-1}^D$ to calculate $\mathbf {q}_t^i$ in the attention mechanism. Next, we use $\mathbf {y}_{prev}$ to calculate $\mathbf {q}_t^i$, where $\mathbf {W}_q \in \mathbb {R}^{(8d_p + d_r) \times d_h}$. In the final variant, we obtain the attentive context vector by concatenating the two attentive vectors obtained using $\mathbf {h}_{t-1}^D$ and $\mathbf {y}_{prev}$. This gives the best performance with the pointer network-based decoding model. These variants are referred to as $\mathrm {dec_{hid}}$, $\mathrm {tup_{prev}}$, and $\mathrm {combo}$ in Table TABREF17.
Encoder-Decoder Architecture ::: Loss Function
We minimize the negative log-likelihood loss of the generated words for word-level decoding ($\mathcal {L}_{word}$) and minimize the sum of negative log-likelihood loss of relation classification and the four pointer locations for pointer network-based decoding ($\mathcal {L}_{ptr}$).
$v_t^b$ is the softmax score of the target word at time step $t$ for the word-level decoding model. $r$, $s$, and $e$ are the softmax score of the corresponding true relation label, true start and end pointer location of an entity. $b$, $t$, and $c$ refer to the $b$th training instance, $t$th time step of decoding, and the two entities of a tuple respectively. $B$ and $T$ are the batch size and maximum time step of the decoder respectively.
Experiments ::: Datasets
We focus on the task of extracting multiple tuples with overlapping entities from sentences. We choose the New York Times (NYT) corpus for our experiments. This corpus has multiple versions, and we choose the following two versions as their test datasets have a significantly larger number of instances of multiple relation tuples with overlapping entities. (i) The first version is used by BIBREF6 (BIBREF6) (mentioned as NYT in their paper) and has 24 relations. We name this version NYT24. (ii) The second version is used by BIBREF11 (BIBREF11) (mentioned as NYT10 in their paper) and has 29 relations. We name this version NYT29. We select 10% of the original training data and use it as the validation dataset. The remaining 90% is used for training. We include statistics of the training and test datasets in Table TABREF11.
Experiments ::: Parameter Settings
We run the Word2Vec BIBREF12 tool on the NYT corpus to initialize the word embeddings. The character embeddings and relation embeddings are initialized randomly. All embeddings are updated during training. We set the word embedding dimension $d_w=300$, relation embedding dimension $d_r=300$, character embedding dimension $d_c=50$, and character-based word feature dimension $d_f=50$. To extract the character-based word feature vector, we set the CNN filter width at 3 and the maximum length of a word at 10. The hidden dimension $d_h$ of the decoder LSTM cell is set at 300 and the hidden dimension of the forward and the backward LSTM of the encoder is set at 150. The hidden dimension of the forward and backward LSTM of the pointer networks is set at $d_p=300$. The model is trained with mini-batch size of 32 and the network parameters are optimized using Adam BIBREF13. Dropout layers with a dropout rate fixed at $0.3$ are used in our network to avoid overfitting.
Experiments ::: Baselines and Evaluation Metrics
We compare our model with the following state-of-the-art joint entity and relation extraction models:
(1) SPTree BIBREF4: This is an end-to-end neural entity and relation extraction model using sequence LSTM and Tree LSTM. Sequence LSTM is used to identify all the entities first and then Tree LSTM is used to find the relation between all pairs of entities.
(2) Tagging BIBREF5: This is a neural sequence tagging model which jointly extracts the entities and relations using an LSTM encoder and an LSTM decoder. They used a Cartesian product of entity tags and relation tags to encode the entity and relation information together. This model does not work when tuples have overlapping entities.
(3) CopyR BIBREF6: This model uses an encoder-decoder approach for joint extraction of entities and relations. It copies only the last token of an entity from the source sentence. Their best performing multi-decoder model is trained with a fixed number of decoders where each decoder extracts one tuple.
(4) HRL BIBREF11: This model uses a reinforcement learning (RL) algorithm with two levels of hierarchy for tuple extraction. A high-level RL finds the relation and a low-level RL identifies the two entities using a sequence tagging approach. This sequence tagging approach cannot always ensure extraction of exactly two entities.
(5) GraphR BIBREF14: This model considers each token in a sentence as a node in a graph, and edges connecting the nodes as relations between them. They use graph convolution network (GCN) to predict the relations of every edge and then filter out some of the relations.
(6) N-gram Attention BIBREF9: This model uses an encoder-decoder approach with N-gram attention mechanism for knowledge-base completion using distantly supervised data. The encoder uses the source tokens as its vocabulary and the decoder uses the entire Wikidata BIBREF15 entity IDs and relation IDs as its vocabulary. The encoder takes the source sentence as input and the decoder outputs the two entity IDs and relation ID for every tuple. During training, it uses the mapping of entity names and their Wikidata IDs of the entire Wikidata for proper alignment. Our task of extracting relation tuples with the raw entity names from a sentence is more challenging since entity names are not of fixed length. Our more generic approach is also helpful for extracting new entities which are not present in the existing knowledge bases such as Wikidata. We use their N-gram attention mechanism in our model to compare its performance with other attention models (Table TABREF17).
We use the same evaluation method used by BIBREF11 (BIBREF11) in their experiments. We consider the extracted tuples as a set and remove the duplicate tuples. An extracted tuple is considered as correct if the corresponding full entity names are correct and the relation is also correct. We report precision, recall, and F1 score for comparison.
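A minimal sketch of this set-based evaluation (duplicates removed; a tuple counts as correct only if both full entity names and the relation match exactly); the (entity1, entity2, relation) tuple representation is an assumption:

def evaluate(predicted, gold):
    """Set-based precision/recall/F1 over (entity1, entity2, relation) tuples.

    `predicted` and `gold` are lists of tuple lists, one entry per sentence.
    Duplicates within a sentence are removed; a predicted tuple is correct
    only if the full entity names and the relation match exactly.
    """
    tp = pred_total = gold_total = 0
    for pred_tuples, gold_tuples in zip(predicted, gold):
        pred_set, gold_set = set(pred_tuples), set(gold_tuples)
        tp += len(pred_set & gold_set)
        pred_total += len(pred_set)
        gold_total += len(gold_set)
    precision = tp / pred_total if pred_total else 0.0
    recall = tp / gold_total if gold_total else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1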
Experiments ::: Experimental Results
Among the baselines, HRL achieves significantly higher F1 scores on the two datasets. We run their model and our models five times and report the median results in Table TABREF15. Scores of other baselines in Table TABREF15 are taken from previous published papers BIBREF6, BIBREF11, BIBREF14. Our WordDecoding (WDec) model achieves F1 scores that are $3.9\%$ and $4.1\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. Similarly, our PtrNetDecoding (PNDec) model achieves F1 scores that are $3.0\%$ and $1.3\%$ higher than HRL on the NYT29 and NYT24 datasets respectively. We perform a statistical significance test (t-test) under a bootstrap pairing between HRL and our models and see that the higher F1 scores achieved by our models are statistically significant ($p < 0.001$). Next, we combine the outputs of five runs of our models and five runs of HRL to build ensemble models. For a test instance, we include those tuples which are extracted in the majority ($\ge 3$) of the five runs. This ensemble mechanism increases the precision significantly on both datasets with a small improvement in recall as well. In the ensemble scenario, compared to HRL, WDec achieves $4.2\%$ and $3.5\%$ higher F1 scores and PNDec achieves $4.2\%$ and $2.9\%$ higher F1 scores on the NYT29 and NYT24 datasets respectively.
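The ensemble is a simple majority vote over tuples extracted by the five runs; a sketch, assuming each run produces one set of tuples per test instance:

from collections import Counter

def ensemble(runs, min_votes=3):
    """Keep tuples extracted by at least `min_votes` of the runs.

    `runs` is a list of per-run outputs; each output is a list (one entry
    per test instance) of tuple sets.
    """
    num_instances = len(runs[0])
    merged = []
    for i in range(num_instances):
        votes = Counter(t for run in runs for t in run[i])
        merged.append({t for t, c in votes.items() if c >= min_votes})
    return merged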
Analysis and Discussion ::: Ablation Studies
We include the performance of different attention mechanisms with our WordDecoding model, effects of our masking-based copy mechanism, and ablation results of three variants of the single attention mechanism with our PtrNetDecoding model in Table TABREF17. WordDecoding with single attention achieves the highest F1 score on both datasets. We also see that our copy mechanism improves F1 scores by around 4–7% in each attention mechanism with both datasets. PtrNetDecoding achieves the highest F1 scores when we combine the two attention mechanisms with respect to the previous hidden vector of the decoder LSTM ($\mathbf {h}_{t-1}^D$) and representation of all previously extracted tuples ($\mathbf {y}_{prev}$).
Analysis and Discussion ::: Performance Analysis
From Table TABREF15, we see that CopyR, HRL, and our models achieve significantly higher F1 scores on the NYT24 dataset than on the NYT29 dataset. Both datasets have a similar set of relations and similar texts (NYT). So task-wise both datasets should pose a similar challenge. However, the F1 scores suggest that the NYT24 dataset is easier than NYT29. The reason is that NYT24 has around 72.0% of overlapping tuples between the training and test data (% of test tuples that appear in the training data with different source sentences). In contrast, NYT29 has only 41.7% of overlapping tuples. Due to the memorization power of deep neural networks, models can achieve a much higher F1 score on NYT24. The difference between the F1 scores of WordDecoding and PtrNetDecoding on NYT24 is marginally higher than on NYT29, since WordDecoding has more trainable parameters (about 27 million) than PtrNetDecoding (about 24.5 million) and NYT24 has very high tuple overlap. However, their ensemble versions achieve closer F1 scores on both datasets.
Despite achieving marginally lower F1 scores, the pointer network-based model can be considered more intuitive and suitable for this task. WordDecoding may not extract the special tokens and relation tokens at the right time steps, which is critical for finding the tuples from the generated sequence of words. PtrNetDecoding always extracts two entities of varying length and a relation for every tuple. We also observe that PtrNetDecoding is more than two times faster and takes one-third of the GPU memory of WordDecoding during training and inference. This speedup and smaller memory consumption are achieved due to the fewer number of decoding steps of PtrNetDecoding compared to WordDecoding. PtrNetDecoding extracts an entire tuple at each time step, whereas WordDecoding extracts just one word at each time step and so requires eight time steps on average to extract a tuple (assuming that the average length of an entity is two). The softmax operation at the projection layer of WordDecoding is applied across the entire vocabulary and the vocabulary size can be large (more than 40,000 for our datasets). In case of PtrNetDecoding, the softmax operation is applied across the sentence length (maximum of 100 in our experiments) and across the relation set (24 and 29 for our datasets). The costly softmax operation and the higher number of decoding time steps significantly increase the training and inference time for WordDecoding. The encoder-decoder model proposed by BIBREF9 (BIBREF9) faces a similar softmax-related problem as their target vocabulary contains the entire Wikidata entity IDs and relation IDs which is in the millions. HRL, which uses a deep reinforcement learning algorithm, takes around 8x more time to train than PtrNetDecoding with a similar GPU configuration. The speedup and smaller memory consumption will be useful when we move from sentence-level extraction to document-level extraction, since document length is much higher than sentence length and a document contains a higher number of tuples.
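A back-of-the-envelope comparison of the per-tuple softmax cost, using only the numbers quoted above (illustrative, not a measured benchmark):

# Rough per-tuple cost of the projection-layer softmax (numbers from the text).
vocab_size = 40_000          # word-level target vocabulary (lower bound)
steps_per_tuple_word = 8     # ~8 word-level decoding steps per tuple
word_cost = steps_per_tuple_word * vocab_size             # ~320,000 scores per tuple

sent_len, num_relations = 100, 29
ptr_cost = 4 * sent_len + num_relations                   # 4 pointers + 1 relation: ~429 scores per tuple

print(f"word-level decoding: ~{word_cost} softmax scores per tuple")
print(f"pointer decoding:    ~{ptr_cost} softmax scores per tuple")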
Analysis and Discussion ::: Error Analysis
The relation tuples extracted by a joint model can be erroneous for multiple reasons such as: (i) extracted entities are wrong; (ii) extracted relations are wrong; (iii) pairings of entities with relations are wrong. To see the effects of the first two reasons, we analyze the performance of HRL and our models on entity generation and relation generation separately. For entity generation, we only consider those entities which are part of some tuple. For relation generation, we only consider the relations of the tuples. We include the performance of our two models and HRL on entity generation and relation generation in Table TABREF20. Our proposed models perform better than HRL on both tasks. Comparing our two models, PtrNetDecoding performs better than WordDecoding on both tasks, although WordDecoding achieves higher F1 scores in tuple extraction. This suggests that PtrNetDecoding makes more errors while pairing the entities with relations. We further analyze the outputs of our models and HRL to determine the errors due to ordering of entities (Order), mismatch of the first entity (Ent1), and mismatch of the second entity (Ent2) in Table TABREF21. WordDecoding generates fewer errors than the other two models in all the categories and thus achieves the highest F1 scores on both datasets.
Related Work
Traditionally, researchers BIBREF0, BIBREF1, BIBREF2, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25 used a pipeline approach for relation tuple extraction where relations were identified using a classification network after all entities were detected. BIBREF26 (BIBREF26) used an encoder-decoder model to extract multiple relations present between two given entities.
Recently, some researchers BIBREF3, BIBREF4, BIBREF27, BIBREF28 tried to bring these two tasks closer together by sharing their parameters and optimizing them together. BIBREF5 (BIBREF5) used a sequence tagging scheme to jointly extract the entities and relations. BIBREF6 (BIBREF6) proposed an encoder-decoder model with copy mechanism to extract relation tuples with overlapping entities. BIBREF11 (BIBREF11) proposed a joint extraction model based on reinforcement learning (RL). BIBREF14 (BIBREF14) used a graph convolution network (GCN) where they treated each token in a sentence as a node in a graph and edges were considered as relations. BIBREF9 (BIBREF9) used an N-gram attention mechanism with an encoder-decoder model for completion of knowledge bases using distant supervised data.
Encoder-decoder models have been used for many NLP applications such as neural machine translation BIBREF29, BIBREF10, BIBREF30, sentence generation from structured data BIBREF31, BIBREF32, and open information extraction BIBREF33, BIBREF34. Pointer networks BIBREF35 have been used to extract a text span from text for tasks such as question answering BIBREF36, BIBREF37. For the first time, we use pointer networks with an encoder-decoder model to extract relation tuples from sentences.
Conclusion
Extracting relation tuples from sentences is a challenging task due to different length of entities, the presence of multiple tuples, and overlapping of entities among tuples. In this paper, we propose two novel approaches using encoder-decoder architecture to address this task. Experiments on the New York Times (NYT) corpus show that our proposed models achieve significantly improved new state-of-the-art F1 scores. As future work, we would like to explore our proposed models for a document-level tuple extraction task.
Acknowledgments
We would like to thank the anonymous reviewers for their valuable and constructive comments on this paper. | Our WordDecoding (WDec) model achieves F1 scores that are $3.9\%$ and $4.1\%$ higher than HRL on the NYT29 and NYT24 datasets respectively, In the ensemble scenario, compared to HRL, WDec achieves $4.2\%$ and $3.5\%$ higher F1 scores |
d32b6ac003cfe6277f8c2eebc7540605a60a3904 | d32b6ac003cfe6277f8c2eebc7540605a60a3904_0 | Q: what were the baselines?
Text: Learning to Rank Scientific Documents from the Crowd
Introduction
The number of biomedical research papers published has increased dramatically in recent years. As of October, 2016, PubMed houses over 26 million citations, with almost 1 million from the first 3 quarters of 2016 alone . It has become impossible for any one person to actually read all of the work being published. We require tools to help us determine which research articles would be most informative and related to a particular question or document. For example, a common task when reading articles is to find articles that are most related to another. Major research search engines offer such a “related articles” feature. However, we propose that instead of measuring relatedness by text-similarity measures, we build a model that is able to infer relatedness from the authors' judgments.
BIBREF0 consider two kinds of queries important to bibliographic information retrieval: the first is a search query written by the user and the second is a request for documents most similar to a document already judged relevant by the user. Such a query-by-document (or query-by-example) system has been implemented in the de facto scientific search engine PubMed—called Related Citation Search. BIBREF1 show that 19% of all PubMed searches performed by users have at least one click on a related article. Google Scholar provides a similar Related Articles system. Outside of bibliographic retrieval, query-by-document systems are commonly used for patent retrieval, Internet search, and plagiarism detection, amongst others. Most work in the area of query-by-document uses text-based similarity measures ( BIBREF2 , BIBREF3 , BIBREF4 ). However, scientific research is hypothesis driven and therefore we question whether text-based similarity alone is the best model for bibliographic retrieval. In this study we asked authors to rank documents by “closeness” to their work. The definition of “closeness” was left for the authors to interpret, as the goal is to model which documents the authors subjectively feel are closest to their own. Throughout the paper we will use “closeness” and “relatedness” interchangeably.
We found that researchers' ranking by closeness differs significantly from the ranking provided by a traditional IR system. Our contributions are threefold:
The principal ranking algorithms of query-by-document in bibliographic information retrieval rely mainly on text similarity measures ( BIBREF1 , BIBREF0 ). For example, the foundational work of BIBREF0 introduced the concept of a “document neighborhood” in which they pre-compute a text-similarity based distance between each pair of documents. When a user issues a query, first an initial set of related documents is retrieved. Then, the neighbors of each of those documents is retrieved, i.e., documents with the highest text similarity to those in the initial set. In a later work, BIBREF1 develop the PMRA algorithm for PubMed related article search. PMRA is an unsupervised probabilistic topic model that is trained to model “relatedness” between documents. BIBREF5 introduce the competing algorithm Find-Similar for this task, treating the full text of documents as a query and selecting related documents from the results.
Outside bibliographic IR, prior work in query-by-document includes patent retrieval ( BIBREF6 , BIBREF3 ), finding related documents given a manuscript ( BIBREF1 , BIBREF7 ), and web page search ( BIBREF8 , BIBREF9 ). Much of the work focuses on generating shorter queries from the lengthy document. For example, noun-phrase extraction has been used for extracting short, descriptive phrases from the original lengthy text ( BIBREF10 ). Topic models have been used to distill a document into a set of topics used to form a query ( BIBREF11 ). BIBREF6 generated queries using the top TF*IDF weighted terms in each document. BIBREF4 suggested extracting phrasal concepts from a document, which are then used to generate queries. BIBREF2 combined query extraction and pseudo-relevance feedback for patent retrieval. BIBREF9 employed a supervised machine learning model (i.e., Conditional Random Fields) ( BIBREF12 ) for query generation. BIBREF13 explored ontologies to identify chemical concepts for queries.
There are also many biomedical-document specific search engines available. Many information retrieval systems focus on question answering systems such as those developed for the TREC Genomics Track ( BIBREF14 ) or BioASQ Question-Answer ( BIBREF15 ) competitions. Systems designed for question-answering use a combination of natural language processing techniques to identify biomedical entities, and then information retrieval systems to extract relevant answers to questions. Systems like those detailed in BIBREF16 can provide answers to yes/no biomedical questions with high precision. However what we propose differs from these systems in a fundamental way: given a specific document, suggest the most important documents that are related to it.
The body of work most related to ours is that of citation recommendation. The goal of citation recommendation is to suggest a small number of publications that can be used as high quality references for a particular article ( BIBREF17 , BIBREF1 ). Topic models have been used to rank articles based on the similarity of latent topic distribution ( BIBREF11 , BIBREF18 , BIBREF1 ). These models attempt to decompose a document into a few important keywords. Specifically, these models attempt to find a latent vector representation of a document that has a much smaller dimensionality than the document itself and compare the reduced dimension vectors.
Citation networks have also been explored for ranking articles by importance, i.e., authority ( BIBREF19 , BIBREF20 ). BIBREF17 introduced heterogeneous network models, called meta-path based models, to incorporate venues (the conference where a paper is published) and content (the terms which link two articles) for citation recommendation. Another highly relevant work is BIBREF8 , who decomposed a document to represent it with a compact vector, which is then used to measure the similarity with other documents. Note that we exclude work on context-aware recommendation, which analyzes each citation's local context; such context is typically short and does not represent a full document.
One of the key contributions of our study is an innovative approach for automatically generating a query-by-document gold standard. Crowd-sourcing has generated large databases, including Wikipedia and Freebase. Recently, BIBREF21 concluded that unpaid participants performed better than paid participants for question answering. They attribute this to unpaid participants being more intrinsically motivated than the paid test takers: they performed the task for fun and already had knowledge about the subject being tested. In contrast, another study, BIBREF22 , compared unpaid workers found through Google Adwords (GA) to paid workers found through Amazon Mechanical Turk (AMT). They found that the paid participants from AMT outperform the unpaid ones. This is attributed to the paid workers being more willing to look up information they didn't know. In the bibliographic domain, authors of scientific publications have contributed annotations ( BIBREF23 ). They found that authors are more willing to annotate their own publications ( BIBREF23 ) than to annotate other publications ( BIBREF24 ) even though they are paid. In this work, our annotated dataset was created by the unpaid authors of the articles.
Benchmark Datasets
In order to develop and evaluate ranking algorithms we need a benchmark dataset. However, to the best of our knowledge, there is no openly available benchmark dataset for bibliographic query-by-document systems. We therefore created such a benchmark dataset.
The creation of any benchmark dataset is a daunting, labor-intensive task, and it is particularly challenging in the scientific domain because one must master the technical jargon of a scientific article, and such experts are not easy to find when using traditional crowd-sourcing technologies (e.g., AMT). For our task, the ideal annotators for each of our articles are the authors themselves. The authors of a publication typically have a clear knowledge of the references they cite and their scientific importance to their publication, and therefore may be excellent judges for ranking the reference articles.
Given the full text of a scientific publication, we want to rank its citations according to the author's judgments. We collected recent publications from the open-access PLoS journals and asked the authors to rank by closeness five citations we selected from their paper. PLoS articles were selected because its journals cover a wide array of topics and the full text articles are available in XML format. We selected the most recent publications as previous work in crowd-sourcing annotation shows that authors' willingness to participate in an unpaid annotation task declines with the age of publication ( BIBREF23 ). We then extracted the abstract, citations, full text, authors, and corresponding author email address from each document. The titles and abstracts of the citations were retrieved from PubMed, and the cosine similarity between the PLoS abstract and the citation's abstract was calculated. We selected the top five most similar abstracts using TF*IDF weighted cosine similarity, shuffled their order, and emailed them to the corresponding author for annotation. We believe that ranking five articles (rather than the entire collection of the references) is a more manageable task for an author compared to asking them to rank all references. Because the documents to be annotated were selected based on text similarity, they also represent a challenging baseline for models based on text-similarity features. In total 416 authors were contacted, and 92 responded (22% response rate). Two responses were removed from the dataset for incomplete annotation.
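A sketch of this candidate-selection step (top five cited abstracts by TF*IDF weighted cosine similarity against the citing article's abstract), shown here with scikit-learn for illustration; the original selection code is not available, so treat this as an approximation:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def top_k_citations(article_abstract, citation_abstracts, k=5):
    """Return indices of the k cited abstracts most similar to the article's abstract."""
    vectorizer = TfidfVectorizer(stop_words="english")
    vectors = vectorizer.fit_transform([article_abstract] + citation_abstracts)
    sims = cosine_similarity(vectors[0], vectors[1:]).ravel()
    return sims.argsort()[::-1][:k]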
We asked authors to rank documents by how “close to your work” they were. The definition of closeness was left to the discretion of the author. The dataset is composed of 90 annotated documents with 5 citations each ranked 1 to 5, where 1 is least relevant and 5 is most relevant for a total of 450 annotated citations.
Learning to Rank
Learning-to-rank is a technique for reordering the results returned from a search engine query. Generally, the initial query to a search engine is concerned more with recall than precision: the goal is to obtain a subset of potentially related documents from the corpus. Then, given this set of potentially related documents, learning-to-rank algorithms reorder the documents such that the most relevant documents appear at the top of the list. This process is illustrated in Figure FIGREF6 .
There are three basic types of learning-to-rank algorithms: point-wise, pair-wise, and list-wise. Point-wise algorithms assign a score to each retrieved document and rank them by their scores. Pair-wise algorithms turn learning-to-rank into a binary classification problem, obtaining a ranking by comparing each individual pair of documents. List-wise algorithms try to optimize an evaluation parameter over all queries in the dataset.
Support Vector Machine (SVM) ( BIBREF25 ) is a commonly used supervised classification algorithm that has shown good performance over a range of tasks. SVM can be thought of as a binary linear classifier where the goal is to maximize the size of the gap between the class-separating line and the points on either side of the line. This helps avoid over-fitting on the training data. SVMRank is a modification to SVM that assigns scores to each data point and allows the results to be ranked ( BIBREF26 ). We use SVMRank in the experiments below. SVMRank has previously been used in the task of document retrieval in ( BIBREF27 ) for a more traditional short query task and has been shown to be a top-performing system for ranking.
SVMRank is a point-wise learning-to-rank algorithm that returns scores for each document. We rank the documents by these scores. It is possible that sometimes two documents will have the same score, resulting in a tie. In this case, we give both documents the same rank, and then leave a gap in the ranking. For example, if documents 2 and 3 are tied, their ranked list will be [5, 3, 3, 2, 1].
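One way to reproduce this tie handling is to rank each document by the number of documents scoring strictly below it; the gap convention follows the [5, 3, 3, 2, 1] example (the concrete scores below are made up):

def scores_to_ranks(scores):
    """Convert SVMRank scores to ranks where a higher rank means a better score.

    Tied documents share a rank and a gap is left above them, e.g. with the
    2nd and 3rd documents tied the ranked list becomes [5, 3, 3, 2, 1].
    """
    return [sum(1 for other in scores if other < s) + 1 for s in scores]

# Example from the text: documents 2 and 3 are tied.
print(scores_to_ranks([0.9, 0.5, 0.5, 0.3, 0.1]))  # -> [5, 3, 3, 2, 1]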
Models are trained by randomly splitting the dataset into 70% training data and 30% test data. We apply a random sub-sampling approach where the dataset is randomly split, trained, and tested 100 times due to the relatively small size of the data. A model is learned for each split and a ranking is produced for each annotated document.
We test three different supervised models. The first supervised model uses only text similarity features, the second model uses all of the features, and the third model runs forward feature selection to select the best performing combination of features. We also test using two different models trained on two different datasets: one trained using the gold standard annotations, and another trained using the judgments based on text similarity that were used to select the citations to give to the authors.
We tested several different learning to rank algorithms for this work. We found in preliminary testing that SVMRank had the best performance, so it will be used in the following experiments.
Features
Each citation is turned into a feature vector representing the relationship between the published article and the citation. Four types of features are used: text similarity, citation count and location, age of the citation, and the number of times the citation has appeared in the literature (citation impact). Text similarity features measure the similarity of the words used in different parts of the document. In this work, we calculate the similarity between a document $d$ and a document it cites $c$ by transforming their text into term vectors. For example, to calculate the similarity of the abstracts between $d$ and $c$ we transform the abstracts into two term vectors, $\mathbf {v}_d$ and $\mathbf {v}_c$. The length of each of the term vectors is $|V|$, the size of the vocabulary. We then weight each word by its Term-frequency * Inverse-document frequency (TF*IDF) weight. TF*IDF is a technique to give higher weight to words that appear frequently in a document but infrequently in the corpus. Term frequency is simply the number of times that a word $w$ appears in a document. Inverse-document frequency is the logarithmically-scaled inverse fraction of documents in the corpus in which the word $w$ appears. Or, more specifically:

$\mathrm{idf}(w, C) = \log \frac{N}{|\lbrace d^{\prime } \in C : w \in d^{\prime }\rbrace |}$

where $N$ is the total number of documents in the corpus, and the denominator is the number of documents in which a term $w$ appears in the corpus $C$. Then, TF*IDF is defined as:

$\mathrm{tfidf}(w, d, C) = \mathrm{tf}(w, d) \times \mathrm{idf}(w, C)$

where $w$ is a term, $d$ is the document, and $C$ is the corpus. For example, the word “the” may appear often in a document, but because it also appears in almost every document in the corpus it is not useful for calculating similarity, thus it receives a very low weight. However, a word such as “neurogenesis” may appear often in a document, but does not appear frequently in the corpus, and so it receives a high weight. The similarity between term vectors is then calculated using cosine similarity:

$\mathrm{sim}(\mathbf {v}_d, \mathbf {v}_c) = \frac{\mathbf {v}_d \cdot \mathbf {v}_c}{\Vert \mathbf {v}_d\Vert \, \Vert \mathbf {v}_c\Vert }$

where $\mathbf {v}_d$ and $\mathbf {v}_c$ are the two term vectors. The cosine similarity is a measure of the angle between the two vectors. The smaller the angle between the two vectors, i.e., the more similar they are, then the closer the value is to 1. Conversely, the more dissimilar the vectors, the closer the cosine similarity is to 0.
We calculate the text similarity between several different sections of the document $d$ and the document it cites $c$. From the citing article $d$, we use the title, full text, abstract, the combined discussion/conclusion sections, and the 10 words on either side of the place in the document where the actual citation occurs. From the document it cites $c$ we only use the title and the abstract due to limited availability of the full text. In this work we combine the discussion and conclusion sections of each document because some documents have only a conclusion section, others have only a discussion, and some have both. The similarity between each of these sections from the two documents is calculated and used as features in the model.
The age of the citation may be relevant to its importance. As a citation ages, we hypothesize that it is more likely to become a “foundational” citation rather than one that directly influenced the development of the article. Therefore more recent citations may be more likely relevant to the article. Similarly, “citation impact”, that is, the number of times a citation has appeared in the literature (as measured by Google Scholar) may be an indicator of whether or not an article is foundational rather than directly related. We hypothesize that the fewer times an article is cited in the literature, the more impact it had on the article at hand.
We also keep track of the number of times a citation is mentioned in both the full text and discussion/conclusion sections. We hypothesize that if a citation is mentioned multiple times, it is more important than citations that are mentioned only once. Further, citations that appear in the discussion/conclusion sections are more likely to be crucial to understanding the results. We normalize the counts of the citations by the total number of citations in that section. In total we select 15 features, shown in Table TABREF15. The features are normalized within each document so that each citation feature is on a scale from 0 to 1 and is evenly distributed within that range. This is done because some of the features (such as years since citation) are unbounded.
Baseline Systems
We compare our system to a variety of baselines. (1) Rank by the number of times a citation is mentioned in the document. (2) Rank by the number of times the citation is cited in the literature (citation impact). (3) Rank using Google Scholar Related Articles. (4) Rank by the TF*IDF weighted cosine similarity. (5) Rank using a learning-to-rank model trained on text similarity rankings. The first two baseline systems are models where the values are ordered from highest to lowest to generate the ranking. The idea behind them is that the number of times a citation is mentioned in an article, or the citation impact may already be good indicators of their closeness. The text similarity model is trained using the same features and methods used by the annotation model, but trained using text similarity rankings instead of the author's judgments.
We also compare our rankings to those found on the popular scientific article search engine Google Scholar. Google Scholar is a “black box” IR system: they do not release details about which features they are using and how they judge relevance of documents. Google Scholar provides a “Related Articles” feature for each document in its index that shows the top 100 related documents for each article. To compare our rankings, we search through these related documents and record the ranking at which each of the citations we selected appeared. We scale these rankings such that the lowest ranked article from Google Scholar has the highest relevance ranking in our set. If the cited document does not appear in the set, we set its relevance-ranking equal to one below the lowest relevance ranking found.
Four comparisons are performed with the Google Scholar data. (1) We first train a model using our gold standard and see if we can predict Google Scholar's ranking. (2) We compare to a baseline of using Google Scholar's rankings to train and compare with their own rankings using our feature set. (3) Then we train a model using Google Scholar's rankings and try to predict our gold standard. (4) We compare it to the model trained on our gold standard to predict our gold standard.
Evaluation Measures
Normalized Discounted Cumulative Gain (NDCG) is a common measure for comparing a list of estimated document relevance judgments with a list of known judgments ( BIBREF28 ). To calculate NDCG we first calculate a ranking's Discounted Cumulative Gain (DCG) as:

$\mathrm{DCG} = \sum _{i=1}^{n} \frac{rel_i}{\log _2 (i + 1)}$

where $rel_i$ is the relevance judgment at position $i$. Intuitively, DCG penalizes retrieval of documents that are not relevant ($rel_i = 0$). However, DCG is an unbounded value. In order to compare the DCG between two models, we must normalize it. To do this, we use the ideal DCG (IDCG), i.e., the maximum possible DCG given the relevance judgments. The maximum possible DCG occurs when the relevance judgments are in the correct order.

$\mathrm{NDCG} = \frac{\mathrm{DCG}}{\mathrm{IDCG}}$
The NDCG value is in the range of 0 to 1, where 0 means that no relevant documents were retrieved, and 1 means that the relevant documents were retrieved and in the correct order of their relevance judgments.
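A small sketch of the NDCG computation as described above; it uses the reconstructed DCG formula, so the exact discounting variant is an assumption:

import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevance judgments."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(predicted_relevances):
    """NDCG: DCG of the predicted order divided by the DCG of the ideal order."""
    ideal = dcg(sorted(predicted_relevances, reverse=True))
    return dcg(predicted_relevances) / ideal if ideal > 0 else 0.0

print(ndcg([3, 5, 4, 2, 1]))  # relevance judgments listed in predicted rank order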
Kendall's $\tau $ is a measure of the correlation between two ranked lists. It compares the number of concordant pairs with the number of discordant pairs between each list. A concordant pair is defined over two observations $(x_i, y_i)$ and $(x_j, y_j)$. If $x_i > x_j$ and $y_i > y_j$, then the pair at indices $(i, j)$ is concordant, that is, the rankings at $i$ and $j$ in both ranking sets $X$ and $Y$ agree with each other. Similarly, a pair $(i, j)$ is discordant if $x_i > x_j$ and $y_i < y_j$, or $x_i < x_j$ and $y_i > y_j$. Kendall's $\tau $ is then defined as:

$\tau = \frac{C - D}{n(n-1)/2}$

where $C$ is the number of concordant pairs, $D$ is the number of discordant pairs, and the denominator, $n(n-1)/2$, is the total number of possible pairs. Thus, Kendall's $\tau $ falls in the range of $[-1, 1]$, where -1 means that the ranked lists are perfectly negatively correlated, 0 means that they are not significantly correlated, and 1 means that the ranked lists are perfectly correlated. One downside of this measure is that it does not take into account where in the ranked list an error occurs. Information retrieval, in general, cares more about errors near the top of the list rather than errors near the bottom of the list.
Average-Precision correlation ( BIBREF29 ) (or $\tau _{AP}$) extends Kendall's $\tau $ by incorporating the position of errors. If an error occurs near the top of the list, then that is penalized more heavily than an error occurring at the bottom of the list. To achieve this, $\tau _{AP}$ incorporates ideas from the popular Average Precision measure, where we calculate the precision at each index of the list and then average them together. $\tau _{AP}$ is defined as:

$\tau _{AP} = \frac{2}{n - 1} \sum _{i=2}^{n} \frac{C(i)}{i - 1} - 1$

where $C(i)$ is the number of items above position $i$ that are correctly ranked with respect to the item at position $i$.

Intuitively, if an error occurs at the top of the list, then that error is propagated into each iteration of the summation, meaning that its penalty is added multiple times. $\tau _{AP}$'s range is between -1 and 1, where -1 means the lists are perfectly negatively correlated, 0 means that they are not significantly correlated, and 1 means that they are perfectly correlated.
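Both correlation measures can be computed as follows; Kendall's $\tau $ comes from SciPy, while the $\tau _{AP}$ implementation follows the reconstructed formula above and should be read as a sketch:

from scipy.stats import kendalltau

def tau_ap(gold, predicted):
    """AP rank correlation: errors near the top of the list are penalized more.

    `gold` and `predicted` are relevance scores for the same documents;
    the list is traversed in decreasing order of the predicted scores.
    """
    order = sorted(range(len(predicted)), key=lambda i: predicted[i], reverse=True)
    n = len(order)
    total = 0.0
    for i in range(1, n):
        # Items ranked above position i that really are more relevant than item i.
        concordant = sum(1 for j in range(i) if gold[order[j]] > gold[order[i]])
        total += concordant / i
    return 2.0 * total / (n - 1) - 1.0

gold, predicted = [5, 4, 3, 2, 1], [0.9, 0.7, 0.8, 0.2, 0.1]
print(kendalltau(gold, predicted).correlation, tau_ap(gold, predicted))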
Forward Feature Selection
Forward feature selection was performed by iteratively testing each feature one at a time. The highest performing feature is kept in the model, and another sweep is done over the remaining features. This continues until all features have been selected. This approach allows us to explore the effect of combinations of features and the effect of having too many or too few features. It also allows us to evaluate which features and combinations of features are the most powerful.
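A sketch of the greedy forward-selection loop described here; train_and_score stands in for training SVMRank on a feature subset and returning the validation NDCG, and both the function and its signature are assumptions:

def forward_selection(all_features, train_and_score):
    """Greedily add the feature that most improves the score until none are left."""
    selected, remaining, history = [], list(all_features), []
    while remaining:
        best_feature = max(remaining, key=lambda f: train_and_score(selected + [f]))
        selected.append(best_feature)
        remaining.remove(best_feature)
        history.append((list(selected), train_and_score(selected)))
    return history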
Results
We first compare our gold standard to the baselines. A random baseline is provided for reference. Because all of the documents that we rank are relevant, NDCG will be fairly high simply by chance. We find that the number of times a document is mentioned in the annotated document is significantly better than the random baseline or the citation impact. The more times a document is mentioned in a paper, the more likely the author was to annotate it as important. Interestingly, we see a negative correlation with the citation impact. The more times a document is mentioned in the literature, the less likely it is to be important. These results are shown in Table TABREF14 .
Next we rank the raw values of the features and compare them to our gold standard to obtain a baseline (Table TABREF15). The best performing text similarity feature is the similarity between the abstract of the annotated document and the abstract of the cited document. However, the features counting the number of times that a cited document is mentioned in the text of the annotated document are also high-scoring, especially in terms of the correlation measures. These results indicate that text similarity alone may not be a good measure for judging the rank of a document.
Next we test three different feature sets for our supervised learning-to-rank models. The model using only the text similarity features performs poorly: NDCG stays at baseline and the correlation measures are low. Models that incorporate information about the age, number of times a cited document was referenced, and the citation impact of that document in addition to the text similarity features significantly outperformed models that used only text similarity features. Because $\tau _{AP}$ takes into account the position in the ranking of the errors, this indicates that the All Features model was able to better correctly place highly ranked documents above lower ranked ones. Similarly, because Kendall's $\tau $ is an overall measure of correlation that does not take into account the position of errors, the higher value here means that more rankings were correctly placed. Interestingly, feature selection (which is optimized for NDCG) does not outperform the model using all of the features in terms of our correlation measures. The features chosen during forward feature selection are (1) the citation impact, (2) number of mentions in the full text, (3) text similarity between the annotated document's title and the referenced document's abstract, (4) the text similarity between the annotated document's discussion/conclusion section and the referenced document's title. These results are shown in Table TABREF16. The models trained on the text similarity judgments perform worse than the models trained on the annotated data. However, in terms of both NDCG and the correlation measures, they perform significantly better than the random baseline.
Next we compare our model to Google Scholar's rankings. Using the ranking collected from Google Scholar, we build a training set to try to predict our authors' rankings. We find that Google Scholar performs similarly to the text-only features model. This indicates that the rankings we obtained from the authors are substantially different than the rankings that Google Scholar provides. Results appear in Table TABREF17 .
Discussion
We found that authors rank the references they cite substantially differently from rankings based on text-similarity. Our results show that decomposing a document into a set of features that is able to capture that difference is key. While text similarity is indeed important (as evidenced by the Similarity(a,a) feature in Table TABREF15 ), we also found that the number of times a document is referenced in the text and the number of times a document is referenced in the literature are also both important features (via feature selection). The more often a citation is mentioned in the text, the more likely it is to be important. This feature is often overlooked in article citation recommendation. We also found that recency is important: the age of the citation is negatively correlated with the rank. Newer citations are more likely to be directly important than older, more foundational citations. Additionally, the number of times a document is cited in the literature is negatively correlated with rank. This is likely due to highly cited documents being more foundational works; they may be older papers that are important to the field but not directly influential to the new work.
The model trained using the author's judgments does significantly better than the model trained using the text-similarity-based judgments. An error analysis was performed to find out why some of the rankings disagreed with the author's annotations. We found that in some cases our features were unable to capture the relationship: for example a biomedical document applying a model developed in another field to the dataset may use very different language to describe the model than the citation. Previous work adopting topic models to query document search may prove useful for such cases.
A small subset of features ended up performing as well as the full list of features. The number of times a citation was mentioned and the citation impact score in the literature ended up being two of the most important features. Indeed, without the citation-based features, the model performs as though it were trained with the text-similarity rankings. Feature engineering is a part of any learning-to-rank system, especially in domain-specific contexts. Citations are an integral feature of our dataset. For learning-to-rank to be applied to other datasets feature engineering must also occur to exploit the unique properties of those datasets. However, we show that combining the domain-specific features with more traditional text-based features does improve the model's scores over simply using the domain-specific features themselves.
Interestingly, citation impact and age of the citation are both negatively correlated with rank. We hypothesize that this is because both measures can be indicators of recency: a new publication is more likely to be directly influenced by more recent work. Many other related search tools, however, treat the citation impact as a positive feature of relatedness: documents with a higher citation impact appear higher on the list of related articles than those with lower citation impacts. This may be the opposite of what the user actually desires.
We also found that rankings from our text-similarity based IR system or Google Scholar's IR system were unable to rank documents by the authors' annotations as well as our system. In one sense, this is reasonable: the rankings coming from these systems were from a different system than the author annotations. However, in domain-specific IR, domain experts are the best judges. We built a system that exploits these expert judgments. The text similarity and Google Scholar models were able to do this to some extent, performing above the random baseline, but not on the level of our model.
Additionally, we observe that NDCG may not be the most appropriate measure for comparing short ranked lists where all of the documents are relevant to some degree. NDCG gives a lot of credit to relevant documents that occur in the highest ranks. However, all of the documents here are relevant, just to varying degrees. Thus, NDCG does not seem to be the most appropriate measure, as is evident in our scores. The correlation coefficients from Kendall's $\tau $ and $\tau _{AP}$ seem to be far more appropriate for this case, as they are not concerned with relevance, only ranking.
One limitation of our work is that we selected a small set of references based on their similarities to the article that cites them. Ideally, we would have had authors rank all of their citations for us, but this would have been a daunting task for authors to perform. We chose to use the Google Scholar dataset in order to attempt to mitigate this: we obtain a ranking for the set of references from a system that is also ranking many other documents. The five citations selected by TF*IDF weighted cosine similarity represent a “hard” gold standard: we are attempting to rank documents that are known to all be relevant by their nature, and have high similarity with the text. Additionally, there is a plethora of other, more expensive features we could explore to improve the model. Citation network features, phrasal concepts, and topic models could all be used to help improve our results, at the cost of computational complexity.
We have developed a model for fast related-document ranking based on crowd-sourced data. The model, data, and data collection software are all publicly available and can easily be used in future applications as an automatic search to help users find the most important citations given a particular document. The experimental setup is portable to other datasets with some feature engineering. We were able to identify that several domain-specific features were crucial to our model, and that we were able to improve on the results of simply using those features alone by adding more traditional features.
Query-by-document is a complicated and challenging task. We provide an approach with an easily obtained dataset and a computationally inexpensive model. By working with biomedical researchers we were able to build a system that ranks documents in a quantitatively different way than previous systems, and to provide a tool that helps researchers find related documents.
Acknowledgments
We would like to thank all of the authors who took the time to answer our citation ranking survey. This work is supported by National Institutes of Health with the grant number 1R01GM095476. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. | Rank by the number of times a citation is mentioned in the document, Rank by the number of times the citation is cited in the literature (citation impact), Rank using Google Scholar Related Articles, Rank by the TF*IDF weighted cosine similarity, Rank using a learning-to-rank model trained on text similarity rankings |
d32b6ac003cfe6277f8c2eebc7540605a60a3904 | d32b6ac003cfe6277f8c2eebc7540605a60a3904_1 | Q: what were the baselines?
Text: Learning to Rank Scientific Documents from the Crowd
Introduction
The number of biomedical research papers published has increased dramatically in recent years. As of October, 2016, PubMed houses over 26 million citations, with almost 1 million from the first 3 quarters of 2016 alone . It has become impossible for any one person to actually read all of the work being published. We require tools to help us determine which research articles would be most informative and related to a particular question or document. For example, a common task when reading articles is to find articles that are most related to another. Major research search engines offer such a “related articles” feature. However, we propose that instead of measuring relatedness by text-similarity measures, we build a model that is able to infer relatedness from the authors' judgments.
BIBREF0 consider two kinds of queries important to bibliographic information retrieval: the first is a search query written by the user and the second is a request for documents most similar to a document already judged relevant by the user. Such a query-by-document (or query-by-example) system has been implemented in the de facto scientific search engine PubMed—called Related Citation Search. BIBREF1 show that 19% of all PubMed searches performed by users have at least one click on a related article. Google Scholar provides a similar Related Articles system. Outside of bibliographic retrieval, query-by-document systems are commonly used for patent retrieval, Internet search, and plagiarism detection, amongst others. Most work in the area of query-by-document uses text-based similarity measures ( BIBREF2 , BIBREF3 , BIBREF4 ). However, scientific research is hypothesis driven and therefore we question whether text-based similarity alone is the best model for bibliographic retrieval. In this study we asked authors to rank documents by “closeness” to their work. The definition of “closeness” was left for the authors to interpret, as the goal is to model which documents the authors subjectively feel are closest to their own. Throughout the paper we will use “closeness” and “relatedness” interchangeably.
We found that researchers' ranking by closeness differs significantly from the ranking provided by a traditional IR system. Our contributions are threefold:
The principal ranking algorithms of query-by-document in bibliographic information retrieval rely mainly on text similarity measures ( BIBREF1 , BIBREF0 ). For example, the foundational work of BIBREF0 introduced the concept of a “document neighborhood” in which they pre-compute a text-similarity based distance between each pair of documents. When a user issues a query, first an initial set of related documents is retrieved. Then, the neighbors of each of those documents is retrieved, i.e., documents with the highest text similarity to those in the initial set. In a later work, BIBREF1 develop the PMRA algorithm for PubMed related article search. PMRA is an unsupervised probabilistic topic model that is trained to model “relatedness” between documents. BIBREF5 introduce the competing algorithm Find-Similar for this task, treating the full text of documents as a query and selecting related documents from the results.
Outside bibliographic IR, prior work in query-by-document includes patent retrieval ( BIBREF6 , BIBREF3 ), finding related documents given a manuscript ( BIBREF1 , BIBREF7 ), and web page search ( BIBREF8 , BIBREF9 ). Much of the work focuses on generating shorter queries from the lengthy document. For example, noun-phrase extraction has been used for extracting short, descriptive phrases from the original lengthy text ( BIBREF10 ). Topic models have been used to distill a document into a set of topics used to form a query ( BIBREF11 ). BIBREF6 generated queries using the top TF*IDF weighted terms in each document. BIBREF4 suggested extracting phrasal concepts from a document, which are then used to generate queries. BIBREF2 combined query extraction and pseudo-relevance feedback for patent retrieval. BIBREF9 employed a supervised machine learning model (i.e., Conditional Random Fields) ( BIBREF12 ) for query generation. BIBREF13 explored ontologies to identify chemical concepts for queries.
There are also many biomedical-document specific search engines available. Many information retrieval systems focus on question answering systems such as those developed for the TREC Genomics Track ( BIBREF14 ) or BioASQ Question-Answer ( BIBREF15 ) competitions. Systems designed for question-answering use a combination of natural language processing techniques to identify biomedical entities, and then information retrieval systems to extract relevant answers to questions. Systems like those detailed in BIBREF16 can provide answers to yes/no biomedical questions with high precision. However what we propose differs from these systems in a fundamental way: given a specific document, suggest the most important documents that are related to it.
The body of work most related to ours is that of citation recommendation. The goal of citation recommendation is to suggest a small number of publications that can be used as high quality references for a particular article ( BIBREF17 , BIBREF1 ). Topic models have been used to rank articles based on the similarity of latent topic distribution ( BIBREF11 , BIBREF18 , BIBREF1 ). These models attempt to decompose a document into a few important keywords. Specifically, these models attempt to find a latent vector representation of a document that has a much smaller dimensionality than the document itself and compare the reduced dimension vectors.
Citation networks have also been explored for ranking articles by importance, i.e., authority ( BIBREF19 , BIBREF20 ). BIBREF17 introduced heterogeneous network models, called meta-path based models, to incorporate venues (the conference where a paper is published) and content (the terms which link two articles) for citation recommendation. Another highly relevant work is BIBREF8 , who decomposed a document to represent it with a compact vector, which is then used to measure the similarity with other documents. Note that we exclude work on context-aware recommendation, which analyzes each citation's local context; such context is typically short and does not represent a full document.
One of the key contributions of our study is an innovative approach for automatically generating a query-by-document gold standard. Crowd-sourcing has generated large databases, including Wikipedia and Freebase. Recently, BIBREF21 concluded that unpaid participants performed better than paid participants for question answering. They attribute this to unpaid participants being more intrinsically motivated than the paid test takers: they performed the task for fun and already had knowledge about the subject being tested. In contrast, another study, BIBREF22 , compared unpaid workers found through Google Adwords (GA) to paid workers found through Amazon Mechanical Turk (AMT). They found that the paid participants from AMT outperform the unpaid ones. This is attributed to the paid workers being more willing to look up information they didn't know. In the bibliographic domain, authors of scientific publications have contributed annotations ( BIBREF23 ). They found that authors are more willing to annotate their own publications ( BIBREF23 ) than to annotate other publications ( BIBREF24 ) even though they are paid. In this work, our annotated dataset was created by the unpaid authors of the articles.
Benchmark Datasets
In order to develop and evaluate ranking algorithms we need a benchmark dataset. However, to the best of our knowledge, there is no openly available benchmark dataset for bibliographic query-by-document systems. We therefore created such a benchmark dataset.
The creation of any benchmark dataset is a daunting, labor-intensive task, and it is particularly challenging in the scientific domain because one must master the technical jargon of a scientific article, and such experts are not easy to find when using traditional crowd-sourcing technologies (e.g., AMT). For our task, the ideal annotators for each of our articles are the authors themselves. The authors of a publication typically have a clear knowledge of the references they cite and their scientific importance to their publication, and therefore may be excellent judges for ranking the reference articles.
Given the full text of a scientific publication, we want to rank its citations according to the author's judgments. We collected recent publications from the open-access PLoS journals and asked the authors to rank by closeness five citations we selected from their paper. PLoS articles were selected because its journals cover a wide array of topics and the full text articles are available in XML format. We selected the most recent publications as previous work in crowd-sourcing annotation shows that authors' willingness to participate in an unpaid annotation task declines with the age of publication ( BIBREF23 ). We then extracted the abstract, citations, full text, authors, and corresponding author email address from each document. The titles and abstracts of the citations were retrieved from PubMed, and the cosine similarity between the PLoS abstract and the citation's abstract was calculated. We selected the top five most similar abstracts using TF*IDF weighted cosine similarity, shuffled their order, and emailed them to the corresponding author for annotation. We believe that ranking five articles (rather than the entire collection of the references) is a more manageable task for an author compared to asking them to rank all references. Because the documents to be annotated were selected based on text similarity, they also represent a challenging baseline for models based on text-similarity features. In total 416 authors were contacted, and 92 responded (22% response rate). Two responses were removed from the dataset for incomplete annotation.
We asked authors to rank documents by how “close to your work” they were. The definition of closeness was left to the discretion of the author. The dataset is composed of 90 annotated documents with 5 citations each ranked 1 to 5, where 1 is least relevant and 5 is most relevant for a total of 450 annotated citations.
Learning to Rank
Learning-to-rank is a technique for reordering the results returned from a search engine query. Generally, the initial query to a search engine is concerned more with recall than precision: the goal is to obtain a subset of potentially related documents from the corpus. Then, given this set of potentially related documents, learning-to-rank algorithms reorder the documents such that the most relevant documents appear at the top of the list. This process is illustrated in Figure FIGREF6 .
There are three basic types of learning-to-rank algorithms: point-wise, pair-wise, and list-wise. Point-wise algorithms assign a score to each retrieved document and rank them by their scores. Pair-wise algorithms turn learning-to-rank into a binary classification problem, obtaining a ranking by comparing each individual pair of documents. List-wise algorithms try to optimize an evaluation parameter over all queries in the dataset.
Support Vector Machine (SVM) ( BIBREF25 ) is a commonly used supervised classification algorithm that has shown good performance over a range of tasks. SVM can be thought of as a binary linear classifier where the goal is to maximize the size of the gap between the class-separating line and the points on either side of the line. This helps avoid over-fitting on the training data. SVMRank is a modification to SVM that assigns scores to each data point and allows the results to be ranked ( BIBREF26 ). We use SVMRank in the experiments below. SVMRank has previously been used in the task of document retrieval in ( BIBREF27 ) for a more traditional short query task and has been shown to be a top-performing system for ranking.
SVMRank is a point-wise learning-to-rank algorithm that returns scores for each document. We rank the documents by these scores. It is possible that sometimes two documents will have the same score, resulting in a tie. In this case, we give both documents the same rank, and then leave a gap in the ranking. For example, if documents 2 and 3 are tied, their ranked list will be [5, 3, 3, 2, 1].
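The tie-handling convention can be made explicit with a small helper. The function below is our reading of the scheme (tied documents share the lowest rank of their block, leaving a gap above them), and it reproduces the [5, 3, 3, 2, 1] example; it is not code released with the paper.

```python
def scores_to_ranks(scores):
    """Convert ranker scores to 5..1 ranks; tied scores share the lowest rank in their block."""
    n = len(scores)
    # Documents ordered from highest to lowest score.
    order = sorted(range(n), key=lambda i: scores[i], reverse=True)
    ranks = [0] * n
    pos = 0
    while pos < n:
        # Extend the block to cover every document tied with the one at this position.
        end = pos
        while end + 1 < n and scores[order[end + 1]] == scores[order[pos]]:
            end += 1
        # All tied documents get the rank of the last (lowest) position in the block.
        block_rank = n - end
        for k in range(pos, end + 1):
            ranks[order[k]] = block_rank
        pos = end + 1
    return ranks

# Example: scores_to_ranks([0.9, 0.7, 0.7, 0.4, 0.1]) -> [5, 3, 3, 2, 1]
```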
Models are trained by randomly splitting the dataset into 70% training data and 30% test data. Because of the relatively small size of the data, we apply a random sub-sampling approach in which the dataset is randomly split, and a model trained and tested, 100 times. A model is learned for each split and a ranking is produced for each annotated document.
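A minimal sketch of this evaluation loop, assuming the splits are drawn over the annotated documents and that training and testing happen once per split (both assumptions are ours):

```python
import random

def random_subsample_splits(documents, n_repeats=100, train_frac=0.7, seed=0):
    """Yield (train, test) document splits for repeated random sub-sampling."""
    rng = random.Random(seed)
    docs = list(documents)
    n_train = int(len(docs) * train_frac)
    for _ in range(n_repeats):
        rng.shuffle(docs)
        yield docs[:n_train], docs[n_train:]
```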
We test three different supervised models. The first supervised model uses only text similarity features, the second model uses all of the features, and the third model runs forward feature selection to select the best performing combination of features. We also test using two different models trained on two different datasets: one trained using the gold standard annotations, and another trained using the judgments based on text similarity that were used to select the citations to give to the authors.
We tested several different learning to rank algorithms for this work. We found in preliminary testing that SVMRank had the best performance, so it will be used in the following experiments.
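As a concrete illustration of how the data reaches SVMRank, the helper below writes the features in the SVMlight-style plain-text format the tool consumes (one line per citation: rank label, query id, numbered features). The feature numbering and file layout here are illustrative assumptions, not the authors' released code.

```python
def write_svmrank_file(path, queries):
    """queries: iterable of (qid, [(rank_label, feature_values), ...]) per annotated document."""
    with open(path, "w") as out:
        for qid, citations in queries:
            for rank_label, features in citations:
                feats = " ".join(f"{i + 1}:{value:.6f}" for i, value in enumerate(features))
                out.write(f"{rank_label} qid:{qid} {feats}\n")
```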
Features
Each citation is turned into a feature vector representing the relationship between the published article and the citation. Four types of features are used: text similarity, citation count and location, age of the citation, and the number of times the citation has appeared in the literature (citation impact). Text similarity features measure the similarity of the words used in different parts of the document. In this work, we calculate the similarity between a document INLINEFORM0 and a document it cites INLINEFORM1 by transforming their text into term vectors. For example, to calculate the similarity of the abstracts between INLINEFORM2 and INLINEFORM3 , we transform the abstracts into two term vectors, INLINEFORM4 and INLINEFORM5 . The length of each of the term vectors is INLINEFORM6 . We then weight each word by its Term-frequency * Inverse-document frequency (TF*IDF) weight. TF*IDF is a technique to give higher weight to words that appear frequently in a document but infrequently in the corpus. Term frequency is simply the number of times that a word INLINEFORM7 appears in a document. Inverse-document frequency is the logarithmically scaled inverse of the fraction of documents in the corpus in which the word INLINEFORM8 appears. Or, more specifically: INLINEFORM9
where INLINEFORM0 is the total number of documents in the corpus, and the denominator is the number of documents in which a term INLINEFORM1 appears in the corpus INLINEFORM2 . Then, TF*IDF is defined as: INLINEFORM3
where INLINEFORM0 is a term, INLINEFORM1 is the document, and INLINEFORM2 is the corpus. For example, the word “the” may appear often in a document, but because it also appears in almost every document in the corpus it is not useful for calculating similarity, thus it receives a very low weight. However, a word such as “neurogenesis” may appear often in a document, but does not appear frequently in the corpus, and so it receives a high weight. The similarity between term vectors is then calculated using cosine similarity: INLINEFORM3
where INLINEFORM0 and INLINEFORM1 are two term vectors. The cosine similarity is a measure of the angle between the two vectors. The smaller the angle between the two vectors, i.e., the more similar they are, the closer the value is to 1. Conversely, the more dissimilar the vectors, the closer the cosine similarity is to 0.
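The formulas hidden behind the placeholders are the standard TF*IDF and cosine definitions. A from-scratch sketch is given below; tokenization, smoothing, and the exact data structures are our own simplifications, and `doc_freq` is assumed to be a precomputed map from word to the number of corpus documents containing it.

```python
import math
from collections import Counter

def tfidf_vector(tokens, doc_freq, n_docs):
    """TF*IDF weights for one document, given corpus document frequencies."""
    counts = Counter(tokens)
    # tf * log(N / df), keeping only words seen in the corpus statistics.
    return {w: tf * math.log(n_docs / doc_freq[w]) for w, tf in counts.items() if w in doc_freq}

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dictionaries."""
    shared = set(u) & set(v)
    dot = sum(u[w] * v[w] for w in shared)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0
```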
We calculate the text similarity between several different sections of the document INLINEFORM0 and the document it cites INLINEFORM1 . From the citing article INLINEFORM2 , we use the title, full text, abstract, the combined discussion/conclusion sections, and the 10 words on either side of the place in the document where the actual citation occurs. From the document it cites INLINEFORM3 we only use the title and the abstract due to limited availability of the full text. In this work we combine the discussion and conclusion sections of each document because some documents have only a conclusion section, others have only a discussion, and some have both. The similarity between each of these sections from the two documents is calculated and used as features in the model.
The age of the citation may be relevant to its importance. We hypothesize that, as a citation ages, it is more likely to become a “foundational” citation rather than one that directly influenced the development of the article. Therefore, more recent citations may be more directly relevant to the article. Similarly, “citation impact”, that is, the number of times a citation has appeared in the literature (as measured by Google Scholar), may be an indicator of whether or not an article is foundational rather than directly related. We hypothesize that the fewer times an article is cited in the literature, the more impact it had on the article at hand.
We also keep track of the number of times a citation is mentioned in both the full text and the discussion/conclusion sections. We hypothesize that if a citation is mentioned multiple times, it is more important than a citation that is mentioned only once. Further, citations that appear in the discussion/conclusion sections are more likely to be crucial to understanding the results. We normalize the counts of the citations by the total number of citations in that section. In total, we select 15 features, shown in Table TABREF15 . The features are normalized within each document so that each of the citation features is on a scale from 0 to 1, and are evenly distributed within that range. This is done because some of the features (such as years since citation) are unbounded.
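The within-document normalization can be read in more than one way. The sketch below implements a rank-based scaling onto [0, 1], which matches the “evenly distributed” wording; a per-document min-max scaling would be the other natural reading, so treat this as an assumption rather than the paper's exact procedure.

```python
def normalize_within_document(values):
    """Rank-scale one feature's values across a document's citations onto [0, 1]."""
    n = len(values)
    if n < 2:
        return [0.0] * n
    order = sorted(range(n), key=lambda i: values[i])
    scaled = [0.0] * n
    for rank, i in enumerate(order):
        scaled[i] = rank / (n - 1)  # evenly spaced; ties receive adjacent positions
    return scaled
```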
Baseline Systems
We compare our system to a variety of baselines. (1) Rank by the number of times a citation is mentioned in the document. (2) Rank by the number of times the citation is cited in the literature (citation impact). (3) Rank using Google Scholar Related Articles. (4) Rank by the TF*IDF weighted cosine similarity. (5) Rank using a learning-to-rank model trained on text similarity rankings. The first two baseline systems are models where the values are ordered from highest to lowest to generate the ranking. The idea behind them is that the number of times a citation is mentioned in an article, or the citation impact may already be good indicators of their closeness. The text similarity model is trained using the same features and methods used by the annotation model, but trained using text similarity rankings instead of the author's judgments.
We also compare our rankings to those found on the popular scientific article search engine Google Scholar. Google Scholar is a “black box” IR system: they do not release details about which features they are using and how they judge relevance of documents. Google Scholar provides a “Related Articles” feature for each document in its index that shows the top 100 related documents for each article. To compare our rankings, we search through these related documents and record the ranking at which each of the citations we selected appeared. We scale these rankings such that the lowest ranked article from Google Scholar has the highest relevance ranking in our set. If the cited document does not appear in the set, we set its relevance-ranking equal to one below the lowest relevance ranking found.
Four comparisons are performed with the Google Scholar data. (1) We first train a model on our gold standard and test whether it can predict Google Scholar's rankings. (2) As a baseline, we train on Google Scholar's rankings and predict those same rankings using our feature set. (3) We then train a model on Google Scholar's rankings and try to predict our gold standard. (4) We compare that to a model trained on our gold standard and used to predict our gold standard.
Evaluation Measures
Normalized Discounted Cumulative Gain (NDCG) is a common measure for comparing a list of estimated document relevance judgments with a list of known judgments ( BIBREF28 ). To calculate NDCG we first calculate a ranking's Discounted Cumulative Gain (DCG) as: DISPLAYFORM0
where rel INLINEFORM0 is the relevance judgment at position INLINEFORM1 . Intuitively, DCG penalizes retrieval of documents that are not relevant (rel INLINEFORM2 ). However, DCG is an unbounded value. In order to compare the DCG between two models, we must normalize it. To do this, we use the ideal DCG (IDCG), i.e., the maximum possible DCG given the relevance judgments. The maximum possible DCG occurs when the relevance judgments are in the correct order. DISPLAYFORM0
The NDCG value is in the range of 0 to 1, where 0 means that no relevant documents were retrieved, and 1 means that the relevant documents were retrieved and in the correct order of their relevance judgments.
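For reference, one common formulation of DCG and NDCG is sketched below; the paper's exact gain and discount variant sits behind the DISPLAYFORM placeholders, so this is an assumption rather than the authors' precise definition.

```python
import math

def dcg(relevances):
    """DCG of relevance judgments listed in ranked order (position 1 first)."""
    return sum(rel / math.log2(pos + 2) for pos, rel in enumerate(relevances))

def ndcg(predicted_order, relevance):
    """NDCG: DCG of the predicted ordering divided by the ideal (sorted) DCG."""
    gains = [relevance[doc] for doc in predicted_order]
    ideal = dcg(sorted(relevance.values(), reverse=True))
    return dcg(gains) / ideal if ideal > 0 else 0.0
```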
Kendall's INLINEFORM0 is a measure of the correlation between two ranked lists. It compares the number of concordant pairs with the number of discordant pairs between each list. A concordant pair is defined over two observations INLINEFORM1 and INLINEFORM2 . If INLINEFORM3 and INLINEFORM4 , then the pair at indices INLINEFORM5 is concordant, that is, the ranking at INLINEFORM6 in both ranking sets INLINEFORM7 and INLINEFORM8 agree with each other. Similarly, a pair INLINEFORM9 is discordant if INLINEFORM10 and INLINEFORM11 or INLINEFORM12 and INLINEFORM13 . Kendall's INLINEFORM14 is then defined as: DISPLAYFORM0
where C is the number of concordant pairs, D is the number of discordant pairs, and the denominator represents the total number of possible pairs. Thus, Kendall's INLINEFORM0 falls in the range of INLINEFORM1 , where -1 means that the ranked lists are perfectly negatively correlated, 0 means that they are not significantly correlated, and 1 means that the ranked lists are perfectly correlated. One downside of this measure is that it does not take into account where in the ranked list an error occurs. Information retrieval, in general, cares more about errors near the top of the list rather than errors near the bottom of the list.
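A direct implementation of the definition above follows; ties are simply not counted here, which is a simplification (tie-corrected variants of Kendall's coefficient exist).

```python
def kendall_tau(ranks_a, ranks_b):
    """Kendall's tau between two equal-length lists of rank values for the same items."""
    n = len(ranks_a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            sign = (ranks_a[i] - ranks_a[j]) * (ranks_b[i] - ranks_b[j])
            if sign > 0:
                concordant += 1
            elif sign < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```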
Average-Precision INLINEFORM0 ( BIBREF29 ) (or INLINEFORM1 ) extends Kendall's INLINEFORM2 by incorporating the position of errors. If an error occurs near the top of the list, it is penalized more heavily than an error occurring at the bottom of the list. To achieve this, INLINEFORM3 incorporates ideas from the popular Average Precision measure, where we calculate the precision at each index of the list and then average them together. INLINEFORM4 is defined as: DISPLAYFORM0
Intuitively, if an error occurs at the top of the list, then that error is propagated into each iteration of the summation, meaning that its penalty is added multiple times. INLINEFORM0 's range is between -1 and 1, where -1 means the lists are perfectly negatively correlated, 0 means that they are not significantly correlated, and 1 means that they are perfectly correlated.
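The measure behind the placeholder is the AP correlation of BIBREF29 ; a sketch of one standard formulation is below. Items tied in the gold relevance are treated as incorrectly ordered here, which is our simplification.

```python
def tau_ap(predicted_order, relevance):
    """AP correlation: like Kendall's tau, but errors near the top are penalized more."""
    n = len(predicted_order)
    total = 0.0
    for i in range(1, n):  # every predicted position except the top one
        item = predicted_order[i]
        above = predicted_order[:i]
        # How many items placed above this one does the gold standard also place above it?
        correct = sum(1 for other in above if relevance[other] > relevance[item])
        total += correct / i
    return 2.0 * total / (n - 1) - 1.0
```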
Forward Feature Selection
Forward feature selection was performed by iteratively testing each feature one at a time. The highest performing feature is kept in the model, and another sweep is done over the remaining features. This continues until all features have been selected. This approach allows us to explore the effect of combinations of features and the effect of having too many or too few features. It also allows us to evaluate which features and combinations of features are the most powerful.
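In pseudocode terms, the procedure is a greedy wrapper around model training. The sketch below assumes an `evaluate(feature_subset)` callback that trains and scores a ranker (e.g., by mean NDCG) for the given subset; that callback and the return format are our assumptions.

```python
def forward_feature_selection(all_features, evaluate):
    """Greedily add one feature at a time, keeping whichever addition scores best."""
    selected, remaining, history = [], list(all_features), []
    while remaining:
        best_score, best_feature = max(
            (evaluate(selected + [f]), f) for f in remaining
        )
        selected.append(best_feature)
        remaining.remove(best_feature)
        history.append((list(selected), best_score))
    return history  # inspect to choose the best-performing subset
```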
Results
We first compare our gold standard to the baselines. A random baseline is provided for reference. Because all of the documents that we rank are relevant, NDCG will be fairly high simply by chance. We find that ranking by the number of times a document is mentioned in the annotated document is significantly better than the random baseline or ranking by citation impact. The more times a document is mentioned in a paper, the more likely the author was to annotate it as important. Interestingly, we see a negative correlation with citation impact: the more times a document is cited in the literature, the less likely it is to be important. These results are shown in Table TABREF14 .
Next we rank the raw values of the features and compare them to our gold standard to obtain a baseline (Table TABREF15 ). The best-performing text similarity feature is the similarity between the abstract of the annotated document and the abstract of the cited document. However, the features counting the number of times a cited document is mentioned in the text of the annotated document are also high-scoring, especially on the INLINEFORM0 correlation coefficient. These results indicate that text similarity alone may not be a good measure for judging the rank of a document.
Next we test three different feature sets for our supervised learning-to-rank models. The model using only the text similarity features performs poorly: NDCG stays at baseline and the correlation measures are low. Models that incorporate information about the age, number of times a cited document was referenced, and the citation impact of that document in addition to the text similarity features significantly outperformed models that used only text similarity features INLINEFORM0 . Because INLINEFORM1 takes into account the position in the ranking of the errors, this indicates that the All Features model was able to better correctly place highly ranked documents above lower ranked ones. Similarly, because Kendall's INLINEFORM2 is an overall measure of correlation that does not take into account the position of errors, the higher value here means that more rankings were correctly placed. Interestingly, feature selection (which is optimized for NDCG) does not outperform the model using all of the features in terms of our correlation measures. The features chosen during forward feature selection are (1) the citation impact, (2) number of mentions in the full text, (3) text similarity between the annotated document's title and the referenced document's abstract, (4) the text similarity between the annotated document's discussion/conclusion section and the referenced document's title. These results are shown in Table TABREF16 . The models trained on the text similarity judgments perform worse than the models trained on the annotated data. However, in terms of both NDCG and the correlation measures, they perform significantly better than the random baseline.
Next we compare our model to Google Scholar's rankings. Using the ranking collected from Google Scholar, we build a training set to try to predict our authors' rankings. We find that Google Scholar performs similarly to the text-only features model. This indicates that the rankings we obtained from the authors are substantially different than the rankings that Google Scholar provides. Results appear in Table TABREF17 .
Discussion
We found that authors rank the references they cite substantially differently from rankings based on text-similarity. Our results show that decomposing a document into a set of features that is able to capture that difference is key. While text similarity is indeed important (as evidenced by the Similarity(a,a) feature in Table TABREF15 ), we also found that the number of times a document is referenced in the text and the number of times a document is referenced in the literature are also both important features (via feature selection). The more often a citation is mentioned in the text, the more likely it is to be important. This feature is often overlooked in article citation recommendation. We also found that recency is important: the age of the citation is negatively correlated with the rank. Newer citations are more likely to be directly important than older, more foundational citations. Additionally, the number of times a document is cited in the literature is negatively correlated with rank. This is likely due to highly cited documents being more foundational works; they may be older papers that are important to the field but not directly influential to the new work.
The model trained using the authors' judgments does significantly better than the model trained using the text-similarity-based judgments. An error analysis was performed to find out why some of the rankings disagreed with the authors' annotations. We found that in some cases our features were unable to capture the relationship: for example, a biomedical document that applies a model developed in another field to its dataset may use very different language to describe the model than the cited document does. Previous work applying topic models to query-by-document search may prove useful for such cases.
A small subset of features ended up performing as well as the full list of features. The number of times a citation was mentioned and the citation impact score in the literature ended up being two of the most important features. Indeed, without the citation-based features, the model performs as though it were trained with the text-similarity rankings. Feature engineering is a part of any learning-to-rank system, especially in domain-specific contexts. Citations are an integral feature of our dataset. For learning-to-rank to be applied to other datasets feature engineering must also occur to exploit the unique properties of those datasets. However, we show that combining the domain-specific features with more traditional text-based features does improve the model's scores over simply using the domain-specific features themselves.
Interestingly, citation impact and age of the citation are both negatively correlated with rank. We hypothesize that this is because both measures can be indicators of recency: a new publication is more likely to be directly influenced by more recent work. Many other related search tools, however, treat the citation impact as a positive feature of relatedness: documents with a higher citation impact appear higher on the list of related articles than those with lower citation impacts. This may be the opposite of what the user actually desires.
We also found that neither our text-similarity-based IR system nor Google Scholar's IR system was able to rank documents according to the authors' annotations as well as our system. In one sense, this is expected: the rankings coming from these systems were produced by a different process than the author annotations. However, in domain-specific IR, domain experts are the best judges. We built a system that exploits these expert judgments. The text similarity and Google Scholar models were able to do this to some extent, performing above the random baseline, but not at the level of our model.
Additionally, we observe that NDCG may not be the most appropriate measure for comparing short ranked lists where all of the documents are relevant to some degree. NDCG gives a lot of credit to relevant documents that occur in the highest ranks. However, all of the documents here are relevant, just to varying degrees. Thus, NDCG does not seem to be the most appropriate measure, as is evident in our scores. The correlation coefficients from Kendall's INLINEFORM0 and INLINEFORM1 seem to be far more appropriate for this case, as they are not concerned with relevance, only ranking.
One limitation of our work is that we selected a small set of references based on their similarities to the article that cites them. Ideally, we would have had authors rank all of their citations for us, but this would have been a daunting task for authors to perform. We chose to use the Google Scholar dataset in order to attempt to mitigate this: we obtain a ranking for the set of references from a system that is also ranking many other documents. The five citations selected by TF*IDF weighted cosine similarity represent a “hard” gold standard: we are attempting to rank documents that are known to all be relevant by their nature, and that have high similarity with the text. Additionally, there is a plethora of other, more expensive features we could explore to improve the model. Citation network features, phrasal concepts, and topic models could all be used to help improve our results, at the cost of computational complexity.
We have developed a model for fast related-document ranking based on crowd-sourced data. The model, data, and data collection software are all publicly available and can easily be used in future applications as an automatic search to help users find the most important citations given a particular document. The experimental setup is portable to other datasets with some feature engineering. We were able to identify that several domain-specific features were crucial to our model, and that we were able to improve on the results of simply using those features alone by adding more traditional features.
Query-by-document is a complicated and challenging task. We provide an approach with an easily obtained dataset and a computationally inexpensive model. By working with biomedical researchers we were able to build a system that ranks documents in a quantitatively different way than previous systems, and to provide a tool that helps researchers find related documents.
Acknowledgments
We would like to thank all of the authors who took the time to answer our citation ranking survey. This work is supported by National Institutes of Health with the grant number 1R01GM095476. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. | (1) Rank by the number of times a citation is mentioned in the document., (2) Rank by the number of times the citation is cited in the literature (citation impact)., (3) Rank using Google Scholar Related Articles., (4) Rank by the TF*IDF weighted cosine similarity., (5) Rank using a learning-to-rank model trained on text similarity rankings. |
c10f38ee97ed80484c1a70b8ebba9b1fb149bc91 | c10f38ee97ed80484c1a70b8ebba9b1fb149bc91_0 | Q: what is the supervised model they developed?
Text: Learning to Rank Scientific Documents from the Crowd
Introduction
The number of biomedical research papers published has increased dramatically in recent years. As of October 2016, PubMed houses over 26 million citations, with almost 1 million from the first three quarters of 2016 alone. It has become impossible for any one person to read all of the work being published. We require tools to help us determine which research articles would be most informative and related to a particular question or document. For example, a common task when reading articles is to find the articles most related to another. Major research search engines offer such a “related articles” feature. However, we propose that instead of measuring relatedness by text-similarity measures, we build a model that is able to infer relatedness from the authors' judgments.
BIBREF0 consider two kinds of queries important to bibliographic information retrieval: the first is a search query written by the user and the second is a request for documents most similar to a document already judged relevant by the user. Such a query-by-document (or query-by-example) system has been implemented in the de facto scientific search engine PubMed—called Related Citation Search. BIBREF1 show that 19% of all PubMed searches performed by users have at least one click on a related article. Google Scholar provides a similar Related Articles system. Outside of bibliographic retrieval, query-by-document systems are commonly used for patent retrieval, Internet search, and plagiarism detection, amongst others. Most work in the area of query-by-document uses text-based similarity measures ( BIBREF2 , BIBREF3 , BIBREF4 ). However, scientific research is hypothesis driven and therefore we question whether text-based similarity alone is the best model for bibliographic retrieval. In this study we asked authors to rank documents by “closeness” to their work. The definition of “closeness” was left for the authors to interpret, as the goal is to model which documents the authors subjectively feel are closest to their own. Throughout the paper we will use “closeness” and “relatedness” interchangeably.
We found that researchers' ranking by closeness differs significantly from the ranking provided by a traditional IR system. Our contributions are threefold:
The principal ranking algorithms of query-by-document in bibliographic information retrieval rely mainly on text similarity measures ( BIBREF1 , BIBREF0 ). For example, the foundational work of BIBREF0 introduced the concept of a “document neighborhood” in which they pre-compute a text-similarity-based distance between each pair of documents. When a user issues a query, first an initial set of related documents is retrieved. Then, the neighbors of each of those documents are retrieved, i.e., the documents with the highest text similarity to those in the initial set. In a later work, BIBREF1 develop the PMRA algorithm for PubMed related article search. PMRA is an unsupervised probabilistic topic model that is trained to model “relatedness” between documents. BIBREF5 introduce the competing algorithm Find-Similar for this task, treating the full text of documents as a query and selecting related documents from the results.
Outside bibliographic IR, prior work in query-by-document includes patent retrieval ( BIBREF6 , BIBREF3 ), finding related documents given a manuscript ( BIBREF1 , BIBREF7 ), and web page search ( BIBREF8 , BIBREF9 ). Much of the work focuses on generating shorter queries from the lengthy document. For example, noun-phrase extraction has been used for extracting short, descriptive phrases from the original lengthy text ( BIBREF10 ). Topic models have been used to distill a document into a set of topics used to form a query ( BIBREF11 ). BIBREF6 generated queries using the top TF*IDF weighted terms in each document. BIBREF4 suggested extracting phrasal concepts from a document, which are then used to generate queries. BIBREF2 combined query extraction and pseudo-relevance feedback for patent retrieval. BIBREF9 employ a supervised machine learning model (i.e., Conditional Random Fields) ( BIBREF12 ) for query generation. BIBREF13 explored ontologies to identify chemical concepts for queries.
There are also many biomedical-document-specific search engines available. Many information retrieval systems focus on question answering, such as those developed for the TREC Genomics Track ( BIBREF14 ) or the BioASQ Question-Answer ( BIBREF15 ) competitions. Systems designed for question answering use a combination of natural language processing techniques to identify biomedical entities, and then information retrieval systems to extract relevant answers to questions. Systems like those detailed in BIBREF16 can provide answers to yes/no biomedical questions with high precision. However, what we propose differs from these systems in a fundamental way: given a specific document, we suggest the most important documents related to it.
The body of work most related to ours is that of citation recommendation. The goal of citation recommendation is to suggest a small number of publications that can be used as high quality references for a particular article ( BIBREF17 , BIBREF1 ). Topic models have been used to rank articles based on the similarity of latent topic distribution ( BIBREF11 , BIBREF18 , BIBREF1 ). These models attempt to decompose a document into a few important keywords. Specifically, these models attempt to find a latent vector representation of a document that has a much smaller dimensionality than the document itself and compare the reduced dimension vectors.
Citation networks have also been explored for ranking articles by importance, i.e., authority ( BIBREF19 , BIBREF20 ). BIBREF17 introduced heterogeneous network models, called meta-path based models, that incorporate venues (the conference where a paper is published) and content (the terms that link two articles) for citation recommendation. Another highly relevant work is BIBREF8 , who decomposed a document to represent it with a compact vector, which is then used to measure the similarity with other documents. Note that we exclude work on context-aware recommendation, which analyzes each citation's local context; such context is typically short and does not represent a full document.
One of the key contributions of our study is an innovative approach for automatically generating a query-by-document gold standard. Crowd-sourcing has generated large databases, including Wikipedia and Freebase. Recently, BIBREF21 concluded that unpaid participants performed better than paid participants for question answering. They attribute this to unpaid participants being more intrinsically motivated than the paid test takers: they performed the task for fun and already had knowledge about the subject being tested. In contrast, another study, BIBREF22 , compared unpaid workers found through Google Adwords (GA) to paid workers found through Amazon Mechanical Turk (AMT). They found that the paid participants from AMT outperform the unpaid ones. This is attributed to the paid workers being more willing to look up information they didn't know. In the bibliographic domain, authors of scientific publications have contributed annotations ( BIBREF23 ). They found that authors are more willing to annotate their own publications ( BIBREF23 ) than to annotate other publications ( BIBREF24 ) even though they are paid. In this work, our annotated dataset was created by the unpaid authors of the articles.
Benchmark Datasets
In order to develop and evaluate ranking algorithms we need a benchmark dataset. However, to the best of our knowledge, no openly available benchmark dataset exists for bibliographic query-by-document systems. We therefore created such a dataset.
The creation of any benchmark dataset is a daunting, labor-intensive task, and it is particularly challenging in the scientific domain because one must master the technical jargon of a scientific article, and such experts are not easy to find with traditional crowd-sourcing platforms (e.g., AMT). For our task, the ideal annotators for each of our articles are the authors themselves. The authors of a publication typically have a clear knowledge of the references they cite and their scientific importance to their publication, and therefore may be excellent judges for ranking the reference articles.
Given the full text of a scientific publication, we want to rank its citations according to the author's judgments. We collected recent publications from the open-access PLoS journals and asked the authors to rank, by closeness, five citations we selected from their paper. PLoS articles were selected because the PLoS journals cover a wide array of topics and the full-text articles are available in XML format. We selected the most recent publications because previous work on crowd-sourced annotation shows that authors' willingness to participate in an unpaid annotation task declines with the age of the publication ( BIBREF23 ). We then extracted the abstract, citations, full text, authors, and corresponding author email address from each document. The titles and abstracts of the citations were retrieved from PubMed, and the cosine similarity between the PLoS abstract and each citation's abstract was calculated. We selected the top five most similar abstracts using TF*IDF weighted cosine similarity, shuffled their order, and emailed them to the corresponding author for annotation. We believe that ranking five articles (rather than the entire collection of the references) is a more manageable task for an author than ranking all references. Because the documents to be annotated were selected based on text similarity, they also represent a challenging baseline for models based on text-similarity features. In total, 416 authors were contacted, and 92 responded (a 22% response rate). Two responses were removed from the dataset for incomplete annotation.
We asked authors to rank documents by how “close to your work” they were. The definition of closeness was left to the discretion of the author. The dataset is composed of 90 annotated documents with 5 citations each, ranked 1 to 5, where 1 is least relevant and 5 is most relevant, for a total of 450 annotated citations.
Learning to Rank
Learning-to-rank is a technique for reordering the results returned from a search engine query. Generally, the initial query to a search engine is concerned more with recall than precision: the goal is to obtain a subset of potentially related documents from the corpus. Then, given this set of potentially related documents, learning-to-rank algorithms reorder the documents such that the most relevant documents appear at the top of the list. This process is illustrated in Figure FIGREF6 .
There are three basic types of learning-to-rank algorithms: point-wise, pair-wise, and list-wise. Point-wise algorithms assign a score to each retrieved document and rank them by their scores. Pair-wise algorithms turn learning-to-rank into a binary classification problem, obtaining a ranking by comparing each individual pair of documents. List-wise algorithms try to optimize an evaluation parameter over all queries in the dataset.
Support Vector Machine (SVM) ( BIBREF25 ) is a commonly used supervised classification algorithm that has shown good performance over a range of tasks. SVM can be thought of as a binary linear classifier where the goal is to maximize the size of the gap between the class-separating line and the points on either side of the line. This helps avoid over-fitting on the training data. SVMRank is a modification to SVM that assigns scores to each data point and allows the results to be ranked ( BIBREF26 ). We use SVMRank in the experiments below. SVMRank has previously been used in the task of document retrieval in ( BIBREF27 ) for a more traditional short query task and has been shown to be a top-performing system for ranking.
SVMRank is a point-wise learning-to-rank algorithm that returns scores for each document. We rank the documents by these scores. It is possible that sometimes two documents will have the same score, resulting in a tie. In this case, we give both documents the same rank, and then leave a gap in the ranking. For example, if documents 2 and 3 are tied, their ranked list will be [5, 3, 3, 2, 1].
Models are trained by randomly splitting the dataset into 70% training data and 30% test data. Because of the relatively small size of the data, we apply a random sub-sampling approach in which the dataset is randomly split, and a model trained and tested, 100 times. A model is learned for each split and a ranking is produced for each annotated document.
We test three different supervised models. The first supervised model uses only text similarity features, the second model uses all of the features, and the third model runs forward feature selection to select the best performing combination of features. We also test using two different models trained on two different datasets: one trained using the gold standard annotations, and another trained using the judgments based on text similarity that were used to select the citations to give to the authors.
We tested several different learning to rank algorithms for this work. We found in preliminary testing that SVMRank had the best performance, so it will be used in the following experiments.
Features
Each citation is turned into a feature vector representing the relationship between the published article and the citation. Four types of features are used: text similarity, citation count and location, age of the citation, and the number of times the citation has appeared in the literature (citation impact). Text similarity features measure the similarity of the words used in different parts of the document. In this work, we calculate the similarity between a document INLINEFORM0 and a document it cites INLINEFORM1 by transforming their text into term vectors. For example, to calculate the similarity of the abstracts between INLINEFORM2 and INLINEFORM3 , we transform the abstracts into two term vectors, INLINEFORM4 and INLINEFORM5 . The length of each of the term vectors is INLINEFORM6 . We then weight each word by its Term-frequency * Inverse-document frequency (TF*IDF) weight. TF*IDF is a technique to give higher weight to words that appear frequently in a document but infrequently in the corpus. Term frequency is simply the number of times that a word INLINEFORM7 appears in a document. Inverse-document frequency is the logarithmically scaled inverse of the fraction of documents in the corpus in which the word INLINEFORM8 appears. Or, more specifically: INLINEFORM9
where INLINEFORM0 is the total number of documents in the corpus, and the denominator is the number of documents in which a term INLINEFORM1 appears in the corpus INLINEFORM2 . Then, TF*IDF is defined as: INLINEFORM3
where INLINEFORM0 is a term, INLINEFORM1 is the document, and INLINEFORM2 is the corpus. For example, the word “the” may appear often in a document, but because it also appears in almost every document in the corpus it is not useful for calculating similarity, thus it receives a very low weight. However, a word such as “neurogenesis” may appear often in a document, but does not appear frequently in the corpus, and so it receives a high weight. The similarity between term vectors is then calculated using cosine similarity: INLINEFORM3
where INLINEFORM0 and INLINEFORM1 are two term vectors. The cosine similarity is a measure of the angle between the two vectors. The smaller the angle between the two vectors, i.e., the more similar they are, the closer the value is to 1. Conversely, the more dissimilar the vectors, the closer the cosine similarity is to 0.
We calculate the text similarity between several different sections of the document INLINEFORM0 and the document it cites INLINEFORM1 . From the citing article INLINEFORM2 , we use the title, full text, abstract, the combined discussion/conclusion sections, and the 10 words on either side of the place in the document where the actual citation occurs. From the document it cites INLINEFORM3 we only use the title and the abstract due to limited availability of the full text. In this work we combine the discussion and conclusion sections of each document because some documents have only a conclusion section, others have only a discussion, and some have both. The similarity between each of these sections from the two documents is calculated and used as features in the model.
The age of the citation may be relevant to its importance. We hypothesize that, as a citation ages, it is more likely to become a “foundational” citation rather than one that directly influenced the development of the article. Therefore, more recent citations may be more directly relevant to the article. Similarly, “citation impact”, that is, the number of times a citation has appeared in the literature (as measured by Google Scholar), may be an indicator of whether or not an article is foundational rather than directly related. We hypothesize that the fewer times an article is cited in the literature, the more impact it had on the article at hand.
We also keep track of the number of times a citation is mentioned in both the full text and the discussion/conclusion sections. We hypothesize that if a citation is mentioned multiple times, it is more important than a citation that is mentioned only once. Further, citations that appear in the discussion/conclusion sections are more likely to be crucial to understanding the results. We normalize the counts of the citations by the total number of citations in that section. In total, we select 15 features, shown in Table TABREF15 . The features are normalized within each document so that each of the citation features is on a scale from 0 to 1, and are evenly distributed within that range. This is done because some of the features (such as years since citation) are unbounded.
Baseline Systems
We compare our system to a variety of baselines. (1) Rank by the number of times a citation is mentioned in the document. (2) Rank by the number of times the citation is cited in the literature (citation impact). (3) Rank using Google Scholar Related Articles. (4) Rank by the TF*IDF weighted cosine similarity. (5) Rank using a learning-to-rank model trained on text similarity rankings. The first two baseline systems are models where the values are ordered from highest to lowest to generate the ranking. The idea behind them is that the number of times a citation is mentioned in an article, or the citation impact may already be good indicators of their closeness. The text similarity model is trained using the same features and methods used by the annotation model, but trained using text similarity rankings instead of the author's judgments.
We also compare our rankings to those found on the popular scientific article search engine Google Scholar. Google Scholar is a “black box” IR system: they do not release details about which features they are using and how they judge relevance of documents. Google Scholar provides a “Related Articles” feature for each document in its index that shows the top 100 related documents for each article. To compare our rankings, we search through these related documents and record the ranking at which each of the citations we selected appeared. We scale these rankings such that the lowest ranked article from Google Scholar has the highest relevance ranking in our set. If the cited document does not appear in the set, we set its relevance-ranking equal to one below the lowest relevance ranking found.
Four comparisons are performed with the Google Scholar data. (1) We first train a model on our gold standard and test whether it can predict Google Scholar's rankings. (2) As a baseline, we train on Google Scholar's rankings and predict those same rankings using our feature set. (3) We then train a model on Google Scholar's rankings and try to predict our gold standard. (4) We compare that to a model trained on our gold standard and used to predict our gold standard.
Evaluation Measures
Normalized Discounted Cumulative Gain (NDCG) is a common measure for comparing a list of estimated document relevance judgments with a list of known judgments ( BIBREF28 ). To calculate NDCG we first calculate a ranking's Discounted Cumulative Gain (DCG) as: DISPLAYFORM0
where rel INLINEFORM0 is the relevance judgment at position INLINEFORM1 . Intuitively, DCG penalizes retrieval of documents that are not relevant (rel INLINEFORM2 ). However, DCG is an unbounded value. In order to compare the DCG between two models, we must normalize it. To do this, we use the ideal DCG (IDCG), i.e., the maximum possible DCG given the relevance judgments. The maximum possible DCG occurs when the relevance judgments are in the correct order. DISPLAYFORM0
The NDCG value is in the range of 0 to 1, where 0 means that no relevant documents were retrieved, and 1 means that the relevant documents were retrieved and in the correct order of their relevance judgments.
Kendall's INLINEFORM0 is a measure of the correlation between two ranked lists. It compares the number of concordant pairs with the number of discordant pairs between each list. A concordant pair is defined over two observations INLINEFORM1 and INLINEFORM2 . If INLINEFORM3 and INLINEFORM4 , then the pair at indices INLINEFORM5 is concordant, that is, the ranking at INLINEFORM6 in both ranking sets INLINEFORM7 and INLINEFORM8 agree with each other. Similarly, a pair INLINEFORM9 is discordant if INLINEFORM10 and INLINEFORM11 or INLINEFORM12 and INLINEFORM13 . Kendall's INLINEFORM14 is then defined as: DISPLAYFORM0
where C is the number of concordant pairs, D is the number of discordant pairs, and the denominator represents the total number of possible pairs. Thus, Kendall's INLINEFORM0 falls in the range of INLINEFORM1 , where -1 means that the ranked lists are perfectly negatively correlated, 0 means that they are not significantly correlated, and 1 means that the ranked lists are perfectly correlated. One downside of this measure is that it does not take into account where in the ranked list an error occurs. Information retrieval, in general, cares more about errors near the top of the list rather than errors near the bottom of the list.
Average-Precision INLINEFORM0 ( BIBREF29 ) (or INLINEFORM1 ) extends Kendall's INLINEFORM2 by incorporating the position of errors. If an error occurs near the top of the list, it is penalized more heavily than an error occurring at the bottom of the list. To achieve this, INLINEFORM3 incorporates ideas from the popular Average Precision measure, where we calculate the precision at each index of the list and then average them together. INLINEFORM4 is defined as: DISPLAYFORM0
Intuitively, if an error occurs at the top of the list, then that error is propagated into each iteration of the summation, meaning that its penalty is added multiple times. INLINEFORM0 's range is between -1 and 1, where -1 means the lists are perfectly negatively correlated, 0 means that they are not significantly correlated, and 1 means that they are perfectly correlated.
Forward Feature Selection
Forward feature selection was performed by iteratively testing each feature one at a time. The highest performing feature is kept in the model, and another sweep is done over the remaining features. This continues until all features have been selected. This approach allows us to explore the effect of combinations of features and the effect of having too many or too few features. It also allows us to evaluate which features and combinations of features are the most powerful.
Results
We first compare our gold standard to the baselines. A random baseline is provided for reference. Because all of the documents that we rank are relevant, NDCG will be fairly high simply by chance. We find that ranking by the number of times a document is mentioned in the annotated document is significantly better than the random baseline or ranking by citation impact. The more times a document is mentioned in a paper, the more likely the author was to annotate it as important. Interestingly, we see a negative correlation with citation impact: the more times a document is cited in the literature, the less likely it is to be important. These results are shown in Table TABREF14 .
Next we rank the raw values of the features and compare them to our gold standard to obtain a baseline (Table TABREF15 ). The best-performing text similarity feature is the similarity between the abstract of the annotated document and the abstract of the cited document. However, the features counting the number of times a cited document is mentioned in the text of the annotated document are also high-scoring, especially on the INLINEFORM0 correlation coefficient. These results indicate that text similarity alone may not be a good measure for judging the rank of a document.
Next we test three different feature sets for our supervised learning-to-rank models. The model using only the text similarity features performs poorly: NDCG stays at baseline and the correlation measures are low. Models that incorporate information about the age, number of times a cited document was referenced, and the citation impact of that document in addition to the text similarity features significantly outperformed models that used only text similarity features INLINEFORM0 . Because INLINEFORM1 takes into account the position in the ranking of the errors, this indicates that the All Features model was able to better correctly place highly ranked documents above lower ranked ones. Similarly, because Kendall's INLINEFORM2 is an overall measure of correlation that does not take into account the position of errors, the higher value here means that more rankings were correctly placed. Interestingly, feature selection (which is optimized for NDCG) does not outperform the model using all of the features in terms of our correlation measures. The features chosen during forward feature selection are (1) the citation impact, (2) number of mentions in the full text, (3) text similarity between the annotated document's title and the referenced document's abstract, (4) the text similarity between the annotated document's discussion/conclusion section and the referenced document's title. These results are shown in Table TABREF16 . The models trained on the text similarity judgments perform worse than the models trained on the annotated data. However, in terms of both NDCG and the correlation measures, they perform significantly better than the random baseline.
Next we compare our model to Google Scholar's rankings. Using the ranking collected from Google Scholar, we build a training set to try to predict our authors' rankings. We find that Google Scholar performs similarly to the text-only features model. This indicates that the rankings we obtained from the authors are substantially different than the rankings that Google Scholar provides. Results appear in Table TABREF17 .
Discussion
We found that authors rank the references they cite substantially differently from rankings based on text-similarity. Our results show that decomposing a document into a set of features that is able to capture that difference is key. While text similarity is indeed important (as evidenced by the Similarity(a,a) feature in Table TABREF15 ), we also found that the number of times a document is referenced in the text and the number of times a document is referenced in the literature are also both important features (via feature selection). The more often a citation is mentioned in the text, the more likely it is to be important. This feature is often overlooked in article citation recommendation. We also found that recency is important: the age of the citation is negatively correlated with the rank. Newer citations are more likely to be directly important than older, more foundational citations. Additionally, the number of times a document is cited in the literature is negatively correlated with rank. This is likely due to highly cited documents being more foundational works; they may be older papers that are important to the field but not directly influential to the new work.
The model trained using the authors' judgments does significantly better than the model trained using the text-similarity-based judgments. An error analysis was performed to find out why some of the rankings disagreed with the authors' annotations. We found that in some cases our features were unable to capture the relationship: for example, a biomedical document that applies a model developed in another field to its dataset may use very different language to describe the model than the cited document does. Previous work applying topic models to query-by-document search may prove useful for such cases.
A small subset of features ended up performing as well as the full list of features. The number of times a citation was mentioned and the citation impact score in the literature ended up being two of the most important features. Indeed, without the citation-based features, the model performs as though it were trained with the text-similarity rankings. Feature engineering is a part of any learning-to-rank system, especially in domain-specific contexts. Citations are an integral feature of our dataset. For learning-to-rank to be applied to other datasets feature engineering must also occur to exploit the unique properties of those datasets. However, we show that combining the domain-specific features with more traditional text-based features does improve the model's scores over simply using the domain-specific features themselves.
Interestingly, citation impact and age of the citation are both negatively correlated with rank. We hypothesize that this is because both measures can be indicators of recency: a new publication is more likely to be directly influenced by more recent work. Many other related search tools, however, treat the citation impact as a positive feature of relatedness: documents with a higher citation impact appear higher on the list of related articles than those with lower citation impacts. This may be the opposite of what the user actually desires.
We also found that neither our text-similarity-based IR system nor Google Scholar's IR system was able to rank documents according to the authors' annotations as well as our system. In one sense, this is expected: the rankings coming from these systems were produced by a different process than the author annotations. However, in domain-specific IR, domain experts are the best judges. We built a system that exploits these expert judgments. The text similarity and Google Scholar models were able to do this to some extent, performing above the random baseline, but not at the level of our model.
Additionally, we observe that NDCG may not be the most appropriate measure for comparing short ranked lists where all of the documents are relevant to some degree. NDCG gives a lot of credit to relevant documents that occur in the highest ranks. However, all of the documents here are relevant, just to varying degrees. Thus, NDCG does not seem to be the most appropriate measure, as is evident in our scores. The correlation coefficients from Kendall's INLINEFORM0 and INLINEFORM1 seem to be far more appropriate for this case, as they are not concerned with relevance, only ranking.
One limitation of our work is that we selected a small set of references based on their similarities to the article that cites them. Ideally, we would have had authors rank all of their citations for us, but this would have been a daunting task for authors to perform. We chose to use the Google Scholar dataset in order to attempt to mitigate this: we obtain a ranking for the set of references from a system that is also ranking many other documents. The five citations selected by TF*IDF weighted cosine similarity represent a “hard” gold standard: we are attempting to rank documents that are known to all be relevant by their nature, and that have high similarity with the text. Additionally, there is a plethora of other, more expensive features we could explore to improve the model. Citation network features, phrasal concepts, and topic models could all be used to help improve our results, at the cost of computational complexity.
We have developed a model for fast related-document ranking based on crowd-sourced data. The model, data, and data collection software are all publicly available and can easily be used in future applications as an automatic search to help users find the most important citations given a particular document. The experimental setup is portable to other datasets with some feature engineering. We were able to identify that several domain-specific features were crucial to our model, and that we were able to improve on the results of simply using those features alone by adding more traditional features.
Query-by-document is a complicated and challenging task. We provide an approach with an easily obtained dataset and a computationally inexpensive model. By working with biomedical researchers we were able to build a system that ranks documents in a quantitatively different way than previous systems, and to provide a tool that helps researchers find related documents.
Acknowledgments
We would like to thank all of the authors who took the time to answer our citation ranking survey. This work is supported by National Institutes of Health with the grant number 1R01GM095476. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. | SVMRank |
340501f23ddc0abe344a239193abbaaab938cc3a | 340501f23ddc0abe344a239193abbaaab938cc3a_0 | Q: what is the size of this built corpus?
Text: Learning to Rank Scientific Documents from the Crowd
Introduction
The number of biomedical research papers published has increased dramatically in recent years. As of October 2016, PubMed houses over 26 million citations, with almost 1 million from the first 3 quarters of 2016 alone. It has become impossible for any one person to read all of the work being published. We require tools to help us determine which research articles would be most informative and related to a particular question or document. For example, a common task when reading articles is to find the articles most related to a given one. Major research search engines offer such a “related articles” feature. However, we propose that instead of measuring relatedness by text-similarity measures, we build a model that is able to infer relatedness from the authors' judgments.
BIBREF0 consider two kinds of queries important to bibliographic information retrieval: the first is a search query written by the user and the second is a request for documents most similar to a document already judged relevant by the user. Such a query-by-document (or query-by-example) system has been implemented in the de facto scientific search engine PubMed—called Related Citation Search. BIBREF1 show that 19% of all PubMed searches performed by users have at least one click on a related article. Google Scholar provides a similar Related Articles system. Outside of bibliographic retrieval, query-by-document systems are commonly used for patent retrieval, Internet search, and plagiarism detection, amongst others. Most work in the area of query-by-document uses text-based similarity measures ( BIBREF2 , BIBREF3 , BIBREF4 ). However, scientific research is hypothesis driven and therefore we question whether text-based similarity alone is the best model for bibliographic retrieval. In this study we asked authors to rank documents by “closeness” to their work. The definition of “closeness” was left for the authors to interpret, as the goal is to model which documents the authors subjectively feel are closest to their own. Throughout the paper we will use “closeness” and “relatedness” interchangeably.
We found that researchers' ranking by closeness differs significantly from the ranking provided by a traditional IR system. Our contributions are threefold:
The principal ranking algorithms of query-by-document in bibliographic information retrieval rely mainly on text similarity measures ( BIBREF1 , BIBREF0 ). For example, the foundational work of BIBREF0 introduced the concept of a “document neighborhood” in which they pre-compute a text-similarity-based distance between each pair of documents. When a user issues a query, an initial set of related documents is first retrieved. Then, the neighbors of each of those documents are retrieved, i.e., documents with the highest text similarity to those in the initial set. In a later work, BIBREF1 develop the PMRA algorithm for PubMed related article search. PMRA is an unsupervised probabilistic topic model that is trained to model “relatedness” between documents. BIBREF5 introduce the competing algorithm Find-Similar for this task, treating the full text of documents as a query and selecting related documents from the results.
Outside bibliographic IR, prior work in query-by-document includes patent retrieval ( BIBREF6 , BIBREF3 ), finding related documents given a manuscript ( BIBREF1 , BIBREF7 ), and web page search ( BIBREF8 , BIBREF9 ). Much of the work focuses on generating shorter queries from the lengthy document. For example, noun-phrase extraction has been used for extracting short, descriptive phrases from the original lengthy text ( BIBREF10 ). Topic models have been used to distill a document into a set of topics used to form query ( BIBREF11 ). BIBREF6 generated queries using the top TF*IDF weighted terms in each document. BIBREF4 suggested extracting phrasal concepts from a document, which are then used to generate queries. BIBREF2 combined query extraction and pseudo-relevance feedback for patent retrieval. BIBREF9 employ supervised machine learning model (i.e., Conditional Random Fields) ( BIBREF12 ) for query generation. BIBREF13 explored ontology to identify chemical concepts for queries.
There are also many biomedical-document specific search engines available. Many information retrieval systems focus on question answering systems such as those developed for the TREC Genomics Track ( BIBREF14 ) or BioASQ Question-Answer ( BIBREF15 ) competitions. Systems designed for question-answering use a combination of natural language processing techniques to identify biomedical entities, and then information retrieval systems to extract relevant answers to questions. Systems like those detailed in BIBREF16 can provide answers to yes/no biomedical questions with high precision. However what we propose differs from these systems in a fundamental way: given a specific document, suggest the most important documents that are related to it.
The body of work most related to ours is that of citation recommendation. The goal of citation recommendation is to suggest a small number of publications that can be used as high quality references for a particular article ( BIBREF17 , BIBREF1 ). Topic models have been used to rank articles based on the similarity of latent topic distribution ( BIBREF11 , BIBREF18 , BIBREF1 ). These models attempt to decompose a document into a few important keywords. Specifically, these models attempt to find a latent vector representation of a document that has a much smaller dimensionality than the document itself and compare the reduced dimension vectors.
Citation networks have also been explored for ranking articles by importance, i.e., authority ( BIBREF19 , BIBREF20 ). BIBREF17 introduced heterogeneous network models, called meta-path based models, to incorporate venues (the conference where a paper is published) and content (the term which links two articles, for citation recommendation). Another highly relevant work is BIBREF8 who decomposed a document to represent it with a compact vector, which is then used to measure the similarity with other documents. Note that we exclude the work of context-aware recommendation, which analyze each citation's local context, which is typically short and does not represent a full document.
One of the key contributions of our study is an innovative approach for automatically generating a query-by-document gold standard. Crowd-sourcing has generated large databases, including Wikipedia and Freebase. Recently, BIBREF21 concluded that unpaid participants performed better than paid participants for question answering. They attribute this to unpaid participants being more intrinsically motivated than the paid test takers: they performed the task for fun and already had knowledge about the subject being tested. In contrast, another study, BIBREF22 , compared unpaid workers found through Google Adwords (GA) to paid workers found through Amazon Mechanical Turk (AMT). They found that the paid participants from AMT outperform the unpaid ones. This is attributed to the paid workers being more willing to look up information they didn't know. In the bibliographic domain, authors of scientific publications have contributed annotations ( BIBREF23 ). They found that authors are more willing to annotate their own publications ( BIBREF23 ) than to annotate other publications ( BIBREF24 ) even though they are paid. In this work, our annotated dataset was created by the unpaid authors of the articles.
Benchmark Datasets
In order to develop and evaluate ranking algorithms we need a benchmark dataset. However, to the best of our knowledge, no openly available benchmark dataset for bibliographic query-by-document systems exists. We therefore created such a benchmark dataset.
The creation of any benchmark dataset is a daunting, labor-intensive task, and it is particularly challenging in the scientific domain because one must master the technical jargon of a scientific article, and such experts are not easy to find when using traditional crowd-sourcing technologies (e.g., AMT). For our task, the ideal annotators for each of our articles are the authors themselves. The authors of a publication typically have a clear knowledge of the references they cite and their scientific importance to their publication, and therefore may be excellent judges for ranking the reference articles.
Given the full text of a scientific publication, we want to rank its citations according to the author's judgments. We collected recent publications from the open-access PLoS journals and asked the authors to rank by closeness five citations we selected from their paper. PLoS articles were selected because its journals cover a wide array of topics and the full text articles are available in XML format. We selected the most recent publications as previous work in crowd-sourcing annotation shows that authors' willingness to participate in an unpaid annotation task declines with the age of publication ( BIBREF23 ). We then extracted the abstract, citations, full text, authors, and corresponding author email address from each document. The titles and abstracts of the citations were retrieved from PubMed, and the cosine similarity between the PLoS abstract and the citation's abstract was calculated. We selected the top five most similar abstracts using TF*IDF weighted cosine similarity, shuffled their order, and emailed them to the corresponding author for annotation. We believe that ranking five articles (rather than the entire collection of the references) is a more manageable task for an author compared to asking them to rank all references. Because the documents to be annotated were selected based on text similarity, they also represent a challenging baseline for models based on text-similarity features. In total 416 authors were contacted, and 92 responded (22% response rate). Two responses were removed from the dataset for incomplete annotation.
We asked authors to rank documents by how “close to your work” they were. The definition of closeness was left to the discretion of the author. The dataset is composed of 90 annotated documents with 5 citations each ranked 1 to 5, where 1 is least relevant and 5 is most relevant for a total of 450 annotated citations.
Learning to Rank
Learning-to-rank is a technique for reordering the results returned from a search engine query. Generally, the initial query to a search engine is concerned more with recall than precision: the goal is to obtain a subset of potentially related documents from the corpus. Then, given this set of potentially related documents, learning-to-rank algorithms reorder the documents such that the most relevant documents appear at the top of the list. This process is illustrated in Figure FIGREF6 .
There are three basic types of learning-to-rank algorithms: point-wise, pair-wise, and list-wise. Point-wise algorithms assign a score to each retrieved document and rank them by their scores. Pair-wise algorithms turn learning-to-rank into a binary classification problem, obtaining a ranking by comparing each individual pair of documents. List-wise algorithms try to optimize an evaluation parameter over all queries in the dataset.
Support Vector Machine (SVM) ( BIBREF25 ) is a commonly used supervised classification algorithm that has shown good performance over a range of tasks. SVM can be thought of as a binary linear classifier where the goal is to maximize the size of the gap between the class-separating line and the points on either side of the line. This helps avoid over-fitting on the training data. SVMRank is a modification to SVM that assigns scores to each data point and allows the results to be ranked ( BIBREF26 ). We use SVMRank in the experiments below. SVMRank has previously been used in the task of document retrieval in ( BIBREF27 ) for a more traditional short query task and has been shown to be a top-performing system for ranking.
SVMRank is a point-wise learning-to-rank algorithm that returns scores for each document. We rank the documents by these scores. It is possible that sometimes two documents will have the same score, resulting in a tie. In this case, we give both documents the same rank, and then leave a gap in the ranking. For example, if documents 2 and 3 are tied, their ranked list will be [5, 3, 3, 2, 1].
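To make the tie handling concrete, the following is a minimal sketch (our own illustration, not the authors' code) of turning raw SVMRank scores into this rank scheme; the example scores are invented.

```python
def scores_to_ranks(scores):
    """Convert SVMRank scores to ranks where the best document gets the
    highest rank value and tied documents share a rank, leaving a gap
    above them (e.g. a tie in positions 2-3 of five -> [5, 3, 3, 2, 1])."""
    return [sum(1 for other in scores if other < s) + 1 for s in scores]

print(scores_to_ranks([2.7, 1.3, 1.3, 0.4, -0.2]))  # [5, 3, 3, 2, 1]
```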
Models are trained by randomly splitting the dataset into 70% training data and 30% test data. We apply a random sub-sampling approach where the dataset is randomly split, trained, and tested 100 times due to the relatively small size of the data. A model is learned for each split and a ranking is produced for each annotated document.
We test three different supervised models. The first supervised model uses only text similarity features, the second model uses all of the features, and the third model runs forward feature selection to select the best performing combination of features. We also test using two different models trained on two different datasets: one trained using the gold standard annotations, and another trained using the judgments based on text similarity that were used to select the citations to give to the authors.
We tested several different learning to rank algorithms for this work. We found in preliminary testing that SVMRank had the best performance, so it will be used in the following experiments.
Features
Each citation is turned into a feature vector representing the relationship between the published article and the citation. Four types of features are used: text similarity, citation count and location, age of the citation, and the number of times the citation has appeared in the literature (citation impact). Text similarity features measure the similarity of the words used in different parts of the document. In this work, we calculate the similarity between a document $d$ and a document it cites $c$ by transforming their text into term vectors. For example, to calculate the similarity of the abstracts of $d$ and $c$, we transform the abstracts into two term vectors, $\mathbf{v}_d$ and $\mathbf{v}_c$. The length of each term vector is the vocabulary size $|V|$. We then weight each word by its Term-frequency * Inverse-document frequency (TF*IDF) weight. TF*IDF is a technique to give higher weight to words that appear frequently in a document but infrequently in the corpus. Term frequency is simply the number of times that a word $t$ appears in a document. Inverse-document frequency is the logarithmically scaled inverse fraction of documents in the corpus in which the word $t$ appears. Or, more specifically: $\mathrm{idf}(t, D) = \log \frac{N}{|\{d \in D : t \in d\}|}$
where $N$ is the total number of documents in the corpus $D$, and the denominator is the number of documents in which the term $t$ appears. Then, TF*IDF is defined as: $\mathrm{tfidf}(t, d, D) = \mathrm{tf}(t, d) \cdot \mathrm{idf}(t, D)$
where $t$ is a term, $d$ is the document, and $D$ is the corpus. For example, the word “the” may appear often in a document, but because it also appears in almost every document in the corpus it is not useful for calculating similarity, thus it receives a very low weight. However, a word such as “neurogenesis” may appear often in a document but does not appear frequently in the corpus, and so it receives a high weight. The similarity between term vectors is then calculated using cosine similarity: $\mathrm{sim}(\mathbf{v}_d, \mathbf{v}_c) = \frac{\mathbf{v}_d \cdot \mathbf{v}_c}{\|\mathbf{v}_d\|\,\|\mathbf{v}_c\|}$
where $\mathbf{v}_d$ and $\mathbf{v}_c$ are the two term vectors. The cosine similarity is a measure of the angle between the two vectors. The smaller the angle between the two vectors, i.e., the more similar they are, the closer the value is to 1. Conversely, the more dissimilar the vectors, the closer the cosine similarity is to 0.
We calculate the text similarity between several different sections of the document $d$ and the document it cites $c$. From the citing article $d$, we use the title, full text, abstract, the combined discussion/conclusion sections, and the 10 words on either side of the place in the document where the actual citation occurs. From the document it cites $c$ we only use the title and the abstract due to limited availability of the full text. In this work we combine the discussion and conclusion sections of each document because some documents have only a conclusion section, others have only a discussion, and some have both. The similarity between each of these sections from the two documents is calculated and used as features in the model.
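To make the feature computation concrete, here is a minimal sketch of the TF*IDF weighting and cosine similarity defined above, run on an invented toy corpus; it is our illustration under those assumptions (whitespace tokenization, raw term counts), not the paper's feature-extraction code.

```python
import math
from collections import Counter

def tfidf_vector(doc_tokens, corpus_tokens):
    """TF*IDF weights for one document, following the formulas above:
    tf(t, d) is the raw count of t in d, idf(t, D) = log(N / df(t))."""
    n_docs = len(corpus_tokens)
    df = Counter(t for doc in corpus_tokens for t in set(doc))
    tf = Counter(doc_tokens)
    return {t: count * math.log(n_docs / df[t]) for t, count in tf.items()}

def cosine(vec_a, vec_b):
    """Cosine similarity between two sparse term-weight dictionaries."""
    dot = sum(w * vec_b.get(t, 0.0) for t, w in vec_a.items())
    norm_a = math.sqrt(sum(w * w for w in vec_a.values()))
    norm_b = math.sqrt(sum(w * w for w in vec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Toy corpus: citing abstract, cited abstract, and one unrelated document.
corpus = [
    "ranking cited documents by author judgments".split(),
    "learning to rank cited documents".split(),
    "neurogenesis in the adult brain".split(),
]
v_d, v_c = tfidf_vector(corpus[0], corpus), tfidf_vector(corpus[1], corpus)
print(round(cosine(v_d, v_c), 3))
```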
The age of the citation may be relevant to its importance. As a citation ages, we hypothesize that it is more likely to become a “foundational” citation rather than one that directly influenced the development of the article. Therefore more recent citations may be more likely relevant to the article. Similarly, “citation impact”, that is, the number of times a citation has appeared in the literature (as measured by Google Scholar) may be an indicator of whether or not an article is foundational rather than directly related. We hypothesize that the fewer times an article is cited in the literature, the more impact it had on the article at hand.
We also keep track of the number of times a citation is mentioned in both the full text and the discussion/conclusion sections. We hypothesize that if a citation is mentioned multiple times, it is more important than citations that are mentioned only once. Further, citations that appear in the discussion/conclusion sections are more likely to be crucial to understanding the results. We normalize the counts of the citations by the total number of citations in that section. In total we select 15 features, shown in Table TABREF15. The features are normalized within each document so that each of the citation features is on a scale from 0 to 1 and is evenly distributed within that range. This is done because some of the features (such as years since citation) are unbounded.
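The phrase “evenly distributed within that range” suggests a within-document rank normalization rather than plain min-max scaling; the sketch below implements that reading and should be treated as our interpretation, not the authors' code.

```python
def normalize_within_document(values):
    """Map one paper's feature values for its five candidate citations onto
    evenly spaced points in [0, 1], preserving their order (ties ignored
    for simplicity). One plausible reading of the normalization above."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    n = len(values)
    normalized = [0.0] * n
    for position, idx in enumerate(order):
        normalized[idx] = position / (n - 1) if n > 1 else 0.0
    return normalized

# e.g. an unbounded feature such as "years since citation" for five references
print(normalize_within_document([12, 3, 7, 1, 25]))  # [0.75, 0.25, 0.5, 0.0, 1.0]
```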
Baseline Systems
We compare our system to a variety of baselines. (1) Rank by the number of times a citation is mentioned in the document. (2) Rank by the number of times the citation is cited in the literature (citation impact). (3) Rank using Google Scholar Related Articles. (4) Rank by the TF*IDF weighted cosine similarity. (5) Rank using a learning-to-rank model trained on text similarity rankings. The first two baseline systems are models where the values are ordered from highest to lowest to generate the ranking. The idea behind them is that the number of times a citation is mentioned in an article, or the citation impact may already be good indicators of their closeness. The text similarity model is trained using the same features and methods used by the annotation model, but trained using text similarity rankings instead of the author's judgments.
We also compare our rankings to those found on the popular scientific article search engine Google Scholar. Google Scholar is a “black box” IR system: they do not release details about which features they are using and how they judge relevance of documents. Google Scholar provides a “Related Articles” feature for each document in its index that shows the top 100 related documents for each article. To compare our rankings, we search through these related documents and record the ranking at which each of the citations we selected appeared. We scale these rankings such that the lowest ranked article from Google Scholar has the highest relevance ranking in our set. If the cited document does not appear in the set, we set its relevance-ranking equal to one below the lowest relevance ranking found.
Four comparisons are performed with the Google Scholar data. (1) We first train a model using our gold standard and see if we can predict Google Scholar's ranking. (2) We compare to a baseline of using Google Scholar's rankings to train and compare with their own rankings using our feature set. (3) Then we train a model using Google Scholar's rankings and try to predict our gold standard. (4) We compare it to the model trained on our gold standard to predict our gold standard.
Evaluation Measures
Normalized Discounted Cumulative Gain (NDCG) is a common measure for comparing a list of estimated document relevance judgments with a list of known judgments ( BIBREF28 ). To calculate NDCG we first calculate a ranking's Discounted Cumulative Gain (DCG) as: $\mathrm{DCG}_p = \sum_{i=1}^{p} \frac{rel_i}{\log_2(i+1)}$
where $rel_i$ is the relevance judgment at position $i$. Intuitively, DCG penalizes retrieval of documents that are not relevant ($rel_i = 0$). However, DCG is an unbounded value. In order to compare the DCG between two models, we must normalize it. To do this, we use the ideal DCG (IDCG), i.e., the maximum possible DCG given the relevance judgments. The maximum possible DCG occurs when the relevance judgments are in the correct order: $\mathrm{NDCG}_p = \frac{\mathrm{DCG}_p}{\mathrm{IDCG}_p}$
The NDCG value is in the range of 0 to 1, where 0 means that no relevant documents were retrieved, and 1 means that the relevant documents were retrieved and in the correct order of their relevance judgments.
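A small sketch of this computation for one annotated paper follows. The exact gain and discount variant is not stated in the paper, so the $rel_i / \log_2(i+1)$ form used here is an assumption on our part.

```python
import math

def dcg(relevances):
    # Discount each relevance judgment by log2 of its (1-based) position + 1.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(predicted_order, true_relevance):
    """predicted_order: document indices ranked best-first by the model.
    true_relevance: gold 1-5 judgment for each document index."""
    gains = [true_relevance[i] for i in predicted_order]
    ideal = sorted(true_relevance, reverse=True)
    return dcg(gains) / dcg(ideal)

gold = [5, 4, 3, 2, 1]                        # author judgments for five citations
print(round(ndcg([0, 2, 1, 3, 4], gold), 3))  # one swap near the top
```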
Kendall's $\tau$ is a measure of the correlation between two ranked lists. It compares the number of concordant pairs with the number of discordant pairs between each list. A concordant pair is defined over two observations $(x_i, y_i)$ and $(x_j, y_j)$. If $x_i > x_j$ and $y_i > y_j$, then the pair at indices $(i, j)$ is concordant, that is, the rankings at $i$ and $j$ in both ranking sets $X$ and $Y$ agree with each other. Similarly, a pair $(i, j)$ is discordant if $x_i > x_j$ and $y_i < y_j$, or $x_i < x_j$ and $y_i > y_j$. Kendall's $\tau$ is then defined as: $\tau = \frac{C - D}{\frac{1}{2}n(n-1)}$
where $C$ is the number of concordant pairs, $D$ is the number of discordant pairs, and the denominator $\frac{1}{2}n(n-1)$ represents the total number of possible pairs. Thus, Kendall's $\tau$ falls in the range $[-1, 1]$, where -1 means that the ranked lists are perfectly negatively correlated, 0 means that they are not significantly correlated, and 1 means that the ranked lists are perfectly correlated. One downside of this measure is that it does not take into account where in the ranked list an error occurs. Information retrieval, in general, cares more about errors near the top of the list rather than errors near the bottom of the list.
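The pair-counting definition translates directly into code. The brute-force sketch below is fine for the five-item lists used here; it is our illustration, and a library implementation would be preferable for longer lists.

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau computed directly from the concordant/discordant
    pair definition above; x and y are rankings of the same items."""
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        sign = (x[i] - x[j]) * (y[i] - y[j])
        if sign > 0:
            concordant += 1
        elif sign < 0:
            discordant += 1
    n_pairs = len(x) * (len(x) - 1) / 2
    return (concordant - discordant) / n_pairs

print(kendall_tau([5, 4, 3, 2, 1], [5, 3, 4, 2, 1]))  # one swapped pair -> 0.8
```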
Average-Precision correlation ( BIBREF29 ) (or $\tau_{AP}$) extends Kendall's $\tau$ by incorporating the position of errors. If an error occurs near the top of the list, then it is penalized more heavily than an error occurring at the bottom of the list. To achieve this, $\tau_{AP}$ incorporates ideas from the popular Average Precision measure, where we calculate the precision at each index of the list and then average them together. $\tau_{AP}$ is defined as: $\tau_{AP} = \frac{2}{N-1} \sum_{i=2}^{N} \frac{C_i}{i-1} - 1$
where $N$ is the length of the list and $C_i$ is the number of items above position $i$ that are correctly ordered with respect to the item at position $i$. Intuitively, if an error occurs at the top of the list, then that error is propagated into each iteration of the summation, meaning that its penalty is added multiple times. $\tau_{AP}$'s range is between -1 and 1, where -1 means the lists are perfectly negatively correlated, 0 means that they are not significantly correlated, and 1 means that they are perfectly correlated.
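A sketch of this closed form is below, visiting items in the model's predicted order; the variable names and the toy rankings are ours, not taken from the paper.

```python
def tau_ap(true_rank, predicted_rank):
    """AP rank correlation: visit items in predicted order (best first);
    C_i counts items placed above position i that the gold ranking also
    places above it. tau_AP = 2/(N-1) * sum_{i=2..N} C_i/(i-1) - 1."""
    items = sorted(range(len(predicted_rank)),
                   key=lambda d: predicted_rank[d], reverse=True)
    n = len(items)
    total = 0.0
    for i in range(1, n):                      # positions 2..N
        above = items[:i]
        c_i = sum(1 for d in above if true_rank[d] > true_rank[items[i]])
        total += c_i / i
    return 2.0 * total / (n - 1) - 1.0

# Same toy lists as before: the swap near the top costs more than under tau.
print(tau_ap([5, 4, 3, 2, 1], [5, 3, 4, 2, 1]))  # 0.75 vs. tau = 0.8
```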
Forward Feature Selection
Forward feature selection was performed by iteratively testing each feature one at a time. The highest performing feature is kept in the model, and another sweep is done over the remaining features. This continues until all features have been selected. This approach allows us to explore the effect of combinations of features and the effect of having too many or too few features. It also allows us to evaluate which features and combinations of features are the most powerful.
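The greedy loop can be sketched as follows. Here score_model is a hypothetical callback that would wrap SVMRank training and NDCG evaluation; the toy feature names and scores are invented.

```python
def forward_selection(all_features, score_model):
    """Greedy forward selection as described above: repeatedly add the
    feature that most improves score_model on the current subset."""
    selected, remaining, history = [], list(all_features), []
    while remaining:
        best = max(remaining, key=lambda f: score_model(selected + [f]))
        selected.append(best)
        remaining.remove(best)
        history.append((list(selected), score_model(selected)))
    return history  # one (subset, score) entry per step; keep the best subset

# Toy usage with a made-up scoring function.
features = ["citation_impact", "mentions_full_text", "sim_title_abstract"]
fake = {("citation_impact",): 0.70,
        ("citation_impact", "mentions_full_text"): 0.74}
score = lambda fs: fake.get(tuple(sorted(fs)), 0.65)
print(forward_selection(features, score))
```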
Results
We first compare our gold standard to the baselines. A random baseline is provided for reference. Because all of the documents that we rank are relevant, NDCG will be fairly high simply by chance. We find that the number of times a document is mentioned in the annotated document is significantly better than the random baseline or the citation impact. The more times a document is mentioned in a paper, the more likely the author was to annotate it as important. Interestingly, we see a negative correlation with the citation impact. The more times a document is mentioned in the literature, the less likely it is to be important. These results are shown in Table TABREF14 .
Next we rank the raw values of the features and compare them to our gold standard to obtain a baseline (Table TABREF15). The best-performing text similarity feature is the similarity between the abstract of the annotated document and the abstract of the cited document. However, the counts of how many times a cited document is mentioned in the text of the annotated document are also high-scoring features, especially under the $\tau_{AP}$ correlation coefficient. These results indicate that text similarity alone may not be a good measure for judging the rank of a document.
Next we test three different feature sets for our supervised learning-to-rank models. The model using only the text similarity features performs poorly: NDCG stays at baseline and the correlation measures are low. Models that incorporate information about the age, the number of times a cited document was referenced, and the citation impact of that document in addition to the text similarity features significantly outperformed models that used only text similarity features. Because $\tau_{AP}$ takes into account the position of errors in the ranking, this indicates that the All Features model was better able to place highly ranked documents above lower-ranked ones. Similarly, because Kendall's $\tau$ is an overall measure of correlation that does not take into account the position of errors, the higher value here means that more rankings were correctly placed. Interestingly, feature selection (which is optimized for NDCG) does not outperform the model using all of the features in terms of our correlation measures. The features chosen during forward feature selection are (1) the citation impact, (2) the number of mentions in the full text, (3) the text similarity between the annotated document's title and the referenced document's abstract, and (4) the text similarity between the annotated document's discussion/conclusion section and the referenced document's title. These results are shown in Table TABREF16. The models trained on the text similarity judgments perform worse than the models trained on the annotated data. However, in terms of both NDCG and the correlation measures, they perform significantly better than the random baseline.
Next we compare our model to Google Scholar's rankings. Using the ranking collected from Google Scholar, we build a training set to try to predict our authors' rankings. We find that Google Scholar performs similarly to the text-only features model. This indicates that the rankings we obtained from the authors are substantially different than the rankings that Google Scholar provides. Results appear in Table TABREF17 .
Discussion
We found that authors rank the references they cite substantially differently from rankings based on text-similarity. Our results show that decomposing a document into a set of features that is able to capture that difference is key. While text similarity is indeed important (as evidenced by the Similarity(a,a) feature in Table TABREF15 ), we also found that the number of times a document is referenced in the text and the number of times a document is referenced in the literature are also both important features (via feature selection). The more often a citation is mentioned in the text, the more likely it is to be important. This feature is often overlooked in article citation recommendation. We also found that recency is important: the age of the citation is negatively correlated with the rank. Newer citations are more likely to be directly important than older, more foundational citations. Additionally, the number of times a document is cited in the literature is negatively correlated with rank. This is likely due to highly cited documents being more foundational works; they may be older papers that are important to the field but not directly influential to the new work.
The model trained using the author's judgments does significantly better than the model trained using the text-similarity-based judgments. An error analysis was performed to find out why some of the rankings disagreed with the author's annotations. We found that in some cases our features were unable to capture the relationship: for example a biomedical document applying a model developed in another field to the dataset may use very different language to describe the model than the citation. Previous work adopting topic models to query document search may prove useful for such cases.
A small subset of features ended up performing as well as the full list of features. The number of times a citation was mentioned and the citation impact score in the literature ended up being two of the most important features. Indeed, without the citation-based features, the model performs as though it were trained with the text-similarity rankings. Feature engineering is a part of any learning-to-rank system, especially in domain-specific contexts. Citations are an integral feature of our dataset. For learning-to-rank to be applied to other datasets feature engineering must also occur to exploit the unique properties of those datasets. However, we show that combining the domain-specific features with more traditional text-based features does improve the model's scores over simply using the domain-specific features themselves.
Interestingly, citation impact and age of the citation are both negatively correlated with rank. We hypothesize that this is because both measures can be indicators of recency: a new publication is more likely to be directly influenced by more recent work. Many other related search tools, however, treat the citation impact as a positive feature of relatedness: documents with a higher citation impact appear higher on the list of related articles than those with lower citation impacts. This may be the opposite of what the user actually desires.
We also found that rankings from our text-similarity based IR system or Google Scholar's IR system were unable to rank documents by the authors' annotations as well as our system. In one sense, this is reasonable: the rankings coming from these systems were from a different system than the author annotations. However, in domain-specific IR, domain experts are the best judges. We built a system that exploits these expert judgments. The text similarity and Google Scholar models were able to do this to some extent, performing above the random baseline, but not on the level of our model.
Additionally, we observe that NDCG may not be the most appropriate measure for comparing short ranked lists where all of the documents are relevant to some degree. NDCG gives a lot of credit to relevant documents that occur in the highest ranks. However, all of the documents here are relevant, just to varying degrees. Thus, NDCG does not seem to be the most appropriate measure, as is evident in our scores. The correlation coefficients from Kendall's $\tau$ and $\tau_{AP}$ seem to be far more appropriate for this case, as they are not concerned with relevance, only ranking.
One limitation of our work is that we selected a small set of references based on their similarities to the article that cites them. Ideally, we would have had authors rank all of their citations for us, but this would have been a daunting task for authors to perform. We chose to use the Google Scholar dataset in order to attempt to mitigate this: we obtain a ranking for the set of references from a system that is also ranking many other documents. The five citations selected by TF*IDF weighted cosine similarity represent a “hard” gold standard: we are attempting to rank documents that are all known to be relevant by their nature and have high similarity with the text. Additionally, there is a plethora of other, more expensive features we could explore to improve the model. Citation network features, phrasal concepts, and topic models could all be used to help improve our results, at the cost of added computational complexity.
We have developed a model for fast related-document ranking based on crowd-sourced data. The model, data, and data collection software are all publicly available and can easily be used in future applications as an automatic search to help users find the most important citations given a particular document. The experimental setup is portable to other datasets with some feature engineering. We were able to identify that several domain-specific features were crucial to our model, and that we were able to improve on the results of simply using those features alone by adding more traditional features.
Query-by-document is a complicated and challenging task. We provide an approach with an easily obtained dataset and a computationally inexpensive model. By working with biomedical researchers we were able to build a system that ranks documents in a quantitatively different way than previous systems, and to provide a tool that helps researchers find related documents.
Acknowledgments
We would like to thank all of the authors who took the time to answer our citation ranking survey. This work is supported by National Institutes of Health with the grant number 1R01GM095476. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. | 90 annotated documents with 5 citations each ranked 1 to 5, where 1 is least relevant and 5 is most relevant for a total of 450 annotated citations |
fbb85cbd41de6d2818e77e8f8d4b91e431931faa | fbb85cbd41de6d2818e77e8f8d4b91e431931faa_0 | Q: what crowdsourcing platform is used?
Text: Learning to Rank Scientific Documents from the Crowd
Introduction
The number of biomedical research papers published has increased dramatically in recent years. As of October 2016, PubMed houses over 26 million citations, with almost 1 million from the first 3 quarters of 2016 alone. It has become impossible for any one person to read all of the work being published. We require tools to help us determine which research articles would be most informative and related to a particular question or document. For example, a common task when reading articles is to find the articles most related to a given one. Major research search engines offer such a “related articles” feature. However, we propose that instead of measuring relatedness by text-similarity measures, we build a model that is able to infer relatedness from the authors' judgments.
BIBREF0 consider two kinds of queries important to bibliographic information retrieval: the first is a search query written by the user and the second is a request for documents most similar to a document already judged relevant by the user. Such a query-by-document (or query-by-example) system has been implemented in the de facto scientific search engine PubMed—called Related Citation Search. BIBREF1 show that 19% of all PubMed searches performed by users have at least one click on a related article. Google Scholar provides a similar Related Articles system. Outside of bibliographic retrieval, query-by-document systems are commonly used for patent retrieval, Internet search, and plagiarism detection, amongst others. Most work in the area of query-by-document uses text-based similarity measures ( BIBREF2 , BIBREF3 , BIBREF4 ). However, scientific research is hypothesis driven and therefore we question whether text-based similarity alone is the best model for bibliographic retrieval. In this study we asked authors to rank documents by “closeness” to their work. The definition of “closeness” was left for the authors to interpret, as the goal is to model which documents the authors subjectively feel are closest to their own. Throughout the paper we will use “closeness” and “relatedness” interchangeably.
We found that researchers' ranking by closeness differs significantly from the ranking provided by a traditional IR system. Our contributions are threefold:
The principal ranking algorithms of query-by-document in bibliographic information retrieval rely mainly on text similarity measures ( BIBREF1 , BIBREF0 ). For example, the foundational work of BIBREF0 introduced the concept of a “document neighborhood” in which they pre-compute a text-similarity-based distance between each pair of documents. When a user issues a query, an initial set of related documents is first retrieved. Then, the neighbors of each of those documents are retrieved, i.e., documents with the highest text similarity to those in the initial set. In a later work, BIBREF1 develop the PMRA algorithm for PubMed related article search. PMRA is an unsupervised probabilistic topic model that is trained to model “relatedness” between documents. BIBREF5 introduce the competing algorithm Find-Similar for this task, treating the full text of documents as a query and selecting related documents from the results.
Outside bibliographic IR, prior work in query-by-document includes patent retrieval ( BIBREF6 , BIBREF3 ), finding related documents given a manuscript ( BIBREF1 , BIBREF7 ), and web page search ( BIBREF8 , BIBREF9 ). Much of the work focuses on generating shorter queries from the lengthy document. For example, noun-phrase extraction has been used for extracting short, descriptive phrases from the original lengthy text ( BIBREF10 ). Topic models have been used to distill a document into a set of topics used to form query ( BIBREF11 ). BIBREF6 generated queries using the top TF*IDF weighted terms in each document. BIBREF4 suggested extracting phrasal concepts from a document, which are then used to generate queries. BIBREF2 combined query extraction and pseudo-relevance feedback for patent retrieval. BIBREF9 employ supervised machine learning model (i.e., Conditional Random Fields) ( BIBREF12 ) for query generation. BIBREF13 explored ontology to identify chemical concepts for queries.
There are also many biomedical-document specific search engines available. Many information retrieval systems focus on question answering systems such as those developed for the TREC Genomics Track ( BIBREF14 ) or BioASQ Question-Answer ( BIBREF15 ) competitions. Systems designed for question-answering use a combination of natural language processing techniques to identify biomedical entities, and then information retrieval systems to extract relevant answers to questions. Systems like those detailed in BIBREF16 can provide answers to yes/no biomedical questions with high precision. However what we propose differs from these systems in a fundamental way: given a specific document, suggest the most important documents that are related to it.
The body of work most related to ours is that of citation recommendation. The goal of citation recommendation is to suggest a small number of publications that can be used as high quality references for a particular article ( BIBREF17 , BIBREF1 ). Topic models have been used to rank articles based on the similarity of latent topic distribution ( BIBREF11 , BIBREF18 , BIBREF1 ). These models attempt to decompose a document into a few important keywords. Specifically, these models attempt to find a latent vector representation of a document that has a much smaller dimensionality than the document itself and compare the reduced dimension vectors.
Citation networks have also been explored for ranking articles by importance, i.e., authority ( BIBREF19 , BIBREF20 ). BIBREF17 introduced heterogeneous network models, called meta-path based models, to incorporate venues (the conference where a paper is published) and content (the term which links two articles, for citation recommendation). Another highly relevant work is BIBREF8 who decomposed a document to represent it with a compact vector, which is then used to measure the similarity with other documents. Note that we exclude the work of context-aware recommendation, which analyze each citation's local context, which is typically short and does not represent a full document.
One of the key contributions of our study is an innovative approach for automatically generating a query-by-document gold standard. Crowd-sourcing has generated large databases, including Wikipedia and Freebase. Recently, BIBREF21 concluded that unpaid participants performed better than paid participants for question answering. They attribute this to unpaid participants being more intrinsically motivated than the paid test takers: they performed the task for fun and already had knowledge about the subject being tested. In contrast, another study, BIBREF22 , compared unpaid workers found through Google Adwords (GA) to paid workers found through Amazon Mechanical Turk (AMT). They found that the paid participants from AMT outperform the unpaid ones. This is attributed to the paid workers being more willing to look up information they didn't know. In the bibliographic domain, authors of scientific publications have contributed annotations ( BIBREF23 ). They found that authors are more willing to annotate their own publications ( BIBREF23 ) than to annotate other publications ( BIBREF24 ) even though they are paid. In this work, our annotated dataset was created by the unpaid authors of the articles.
Benchmark Datasets
In order to develop and evaluate ranking algorithms we need a benchmark dataset. However, to the best of our knowledge, no openly available benchmark dataset for bibliographic query-by-document systems exists. We therefore created such a benchmark dataset.
The creation of any benchmark dataset is a daunting, labor-intensive task, and it is particularly challenging in the scientific domain because one must master the technical jargon of a scientific article, and such experts are not easy to find when using traditional crowd-sourcing technologies (e.g., AMT). For our task, the ideal annotators for each of our articles are the authors themselves. The authors of a publication typically have a clear knowledge of the references they cite and their scientific importance to their publication, and therefore may be excellent judges for ranking the reference articles.
Given the full text of a scientific publication, we want to rank its citations according to the author's judgments. We collected recent publications from the open-access PLoS journals and asked the authors to rank by closeness five citations we selected from their paper. PLoS articles were selected because its journals cover a wide array of topics and the full text articles are available in XML format. We selected the most recent publications as previous work in crowd-sourcing annotation shows that authors' willingness to participate in an unpaid annotation task declines with the age of publication ( BIBREF23 ). We then extracted the abstract, citations, full text, authors, and corresponding author email address from each document. The titles and abstracts of the citations were retrieved from PubMed, and the cosine similarity between the PLoS abstract and the citation's abstract was calculated. We selected the top five most similar abstracts using TF*IDF weighted cosine similarity, shuffled their order, and emailed them to the corresponding author for annotation. We believe that ranking five articles (rather than the entire collection of the references) is a more manageable task for an author compared to asking them to rank all references. Because the documents to be annotated were selected based on text similarity, they also represent a challenging baseline for models based on text-similarity features. In total 416 authors were contacted, and 92 responded (22% response rate). Two responses were removed from the dataset for incomplete annotation.
We asked authors to rank documents by how “close to your work” they were. The definition of closeness was left to the discretion of the author. The dataset is composed of 90 annotated documents with 5 citations each ranked 1 to 5, where 1 is least relevant and 5 is most relevant for a total of 450 annotated citations.
Learning to Rank
Learning-to-rank is a technique for reordering the results returned from a search engine query. Generally, the initial query to a search engine is concerned more with recall than precision: the goal is to obtain a subset of potentially related documents from the corpus. Then, given this set of potentially related documents, learning-to-rank algorithms reorder the documents such that the most relevant documents appear at the top of the list. This process is illustrated in Figure FIGREF6 .
There are three basic types of learning-to-rank algorithms: point-wise, pair-wise, and list-wise. Point-wise algorithms assign a score to each retrieved document and rank them by their scores. Pair-wise algorithms turn learning-to-rank into a binary classification problem, obtaining a ranking by comparing each individual pair of documents. List-wise algorithms try to optimize an evaluation parameter over all queries in the dataset.
Support Vector Machine (SVM) ( BIBREF25 ) is a commonly used supervised classification algorithm that has shown good performance over a range of tasks. SVM can be thought of as a binary linear classifier where the goal is to maximize the size of the gap between the class-separating line and the points on either side of the line. This helps avoid over-fitting on the training data. SVMRank is a modification to SVM that assigns scores to each data point and allows the results to be ranked ( BIBREF26 ). We use SVMRank in the experiments below. SVMRank has previously been used in the task of document retrieval in ( BIBREF27 ) for a more traditional short query task and has been shown to be a top-performing system for ranking.
SVMRank is a point-wise learning-to-rank algorithm that returns scores for each document. We rank the documents by these scores. It is possible that sometimes two documents will have the same score, resulting in a tie. In this case, we give both documents the same rank, and then leave a gap in the ranking. For example, if documents 2 and 3 are tied, their ranked list will be [5, 3, 3, 2, 1].
Models are trained by randomly splitting the dataset into 70% training data and 30% test data. We apply a random sub-sampling approach where the dataset is randomly split, trained, and tested 100 times due to the relatively small size of the data. A model is learned for each split and a ranking is produced for each annotated document.
We test three different supervised models. The first supervised model uses only text similarity features, the second model uses all of the features, and the third model runs forward feature selection to select the best performing combination of features. We also test using two different models trained on two different datasets: one trained using the gold standard annotations, and another trained using the judgments based on text similarity that were used to select the citations to give to the authors.
We tested several different learning to rank algorithms for this work. We found in preliminary testing that SVMRank had the best performance, so it will be used in the following experiments.
Features
Each citation is turned into a feature vector representing the relationship between the published article and the citation. Four types of features are used: text similarity, citation count and location, age of the citation, and the number of times the citation has appeared in the literature (citation impact). Text similarity features measure the similarity of the words used in different parts of the document. In this work, we calculate the similarity between a document $d$ and a document it cites $c$ by transforming their text into term vectors. For example, to calculate the similarity of the abstracts of $d$ and $c$, we transform the abstracts into two term vectors, $\mathbf{v}_d$ and $\mathbf{v}_c$. The length of each term vector is the vocabulary size $|V|$. We then weight each word by its Term-frequency * Inverse-document frequency (TF*IDF) weight. TF*IDF is a technique to give higher weight to words that appear frequently in a document but infrequently in the corpus. Term frequency is simply the number of times that a word $t$ appears in a document. Inverse-document frequency is the logarithmically scaled inverse fraction of documents in the corpus in which the word $t$ appears. Or, more specifically: $\mathrm{idf}(t, D) = \log \frac{N}{|\{d \in D : t \in d\}|}$
where $N$ is the total number of documents in the corpus $D$, and the denominator is the number of documents in which the term $t$ appears. Then, TF*IDF is defined as: $\mathrm{tfidf}(t, d, D) = \mathrm{tf}(t, d) \cdot \mathrm{idf}(t, D)$
where $t$ is a term, $d$ is the document, and $D$ is the corpus. For example, the word “the” may appear often in a document, but because it also appears in almost every document in the corpus it is not useful for calculating similarity, thus it receives a very low weight. However, a word such as “neurogenesis” may appear often in a document but does not appear frequently in the corpus, and so it receives a high weight. The similarity between term vectors is then calculated using cosine similarity: $\mathrm{sim}(\mathbf{v}_d, \mathbf{v}_c) = \frac{\mathbf{v}_d \cdot \mathbf{v}_c}{\|\mathbf{v}_d\|\,\|\mathbf{v}_c\|}$
where $\mathbf{v}_d$ and $\mathbf{v}_c$ are the two term vectors. The cosine similarity is a measure of the angle between the two vectors. The smaller the angle between the two vectors, i.e., the more similar they are, the closer the value is to 1. Conversely, the more dissimilar the vectors, the closer the cosine similarity is to 0.
We calculate the text similarity between several different sections of the document $d$ and the document it cites $c$. From the citing article $d$, we use the title, full text, abstract, the combined discussion/conclusion sections, and the 10 words on either side of the place in the document where the actual citation occurs. From the document it cites $c$ we only use the title and the abstract due to limited availability of the full text. In this work we combine the discussion and conclusion sections of each document because some documents have only a conclusion section, others have only a discussion, and some have both. The similarity between each of these sections from the two documents is calculated and used as features in the model.
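For reference, this abstract-to-abstract similarity can also be sketched with an off-the-shelf library; note that scikit-learn's default IDF smoothing and L2 normalization differ slightly from the formulas given above, and the toy abstracts are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

citing_abstract = "We rank cited documents by the authors' own judgments."
cited_abstract = "A learning-to-rank model for related biomedical documents."

vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform([citing_abstract, cited_abstract])
print(cosine_similarity(vectors[0], vectors[1])[0, 0])
```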
The age of the citation may be relevant to its importance. As a citation ages, we hypothesize that it is more likely to become a “foundational” citation rather than one that directly influenced the development of the article. Therefore more recent citations may be more likely relevant to the article. Similarly, “citation impact”, that is, the number of times a citation has appeared in the literature (as measured by Google Scholar) may be an indicator of whether or not an article is foundational rather than directly related. We hypothesize that the fewer times an article is cited in the literature, the more impact it had on the article at hand.
We also keep track of the number of times a citation is mentioned in both the full text and the discussion/conclusion sections. We hypothesize that if a citation is mentioned multiple times, it is more important than citations that are mentioned only once. Further, citations that appear in the discussion/conclusion sections are more likely to be crucial to understanding the results. We normalize the counts of the citations by the total number of citations in that section. In total we select 15 features, shown in Table TABREF15. The features are normalized within each document so that each of the citation features is on a scale from 0 to 1 and is evenly distributed within that range. This is done because some of the features (such as years since citation) are unbounded.
Baseline Systems
We compare our system to a variety of baselines. (1) Rank by the number of times a citation is mentioned in the document. (2) Rank by the number of times the citation is cited in the literature (citation impact). (3) Rank using Google Scholar Related Articles. (4) Rank by the TF*IDF weighted cosine similarity. (5) Rank using a learning-to-rank model trained on text similarity rankings. The first two baseline systems are models where the values are ordered from highest to lowest to generate the ranking. The idea behind them is that the number of times a citation is mentioned in an article, or the citation impact may already be good indicators of their closeness. The text similarity model is trained using the same features and methods used by the annotation model, but trained using text similarity rankings instead of the author's judgments.
We also compare our rankings to those found on the popular scientific article search engine Google Scholar. Google Scholar is a “black box” IR system: they do not release details about which features they are using and how they judge relevance of documents. Google Scholar provides a “Related Articles” feature for each document in its index that shows the top 100 related documents for each article. To compare our rankings, we search through these related documents and record the ranking at which each of the citations we selected appeared. We scale these rankings such that the lowest ranked article from Google Scholar has the highest relevance ranking in our set. If the cited document does not appear in the set, we set its relevance-ranking equal to one below the lowest relevance ranking found.
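Our reading of this scaling rule can be sketched as follows; the function and the example positions are our own illustration, not the authors' code.

```python
def scholar_positions_to_relevance(positions):
    """positions: Google Scholar 'Related Articles' rank for each of the five
    citations, or None if it did not appear in the top 100. The citation
    Google Scholar ranks closest to the top gets the highest relevance value;
    missing citations get one less than the lowest relevance assigned."""
    found = sorted(p for p in positions if p is not None)
    relevance_of = {p: len(found) - i for i, p in enumerate(found)}
    missing_value = min(relevance_of.values(), default=1) - 1
    return [relevance_of[p] if p is not None else missing_value
            for p in positions]

print(scholar_positions_to_relevance([3, 57, None, 12, 1]))  # [3, 1, 0, 2, 4]
```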
Four comparisons are performed with the Google Scholar data. (1) We first train a model using our gold standard and see if we can predict Google Scholar's ranking. (2) We compare to a baseline of using Google Scholar's rankings to train and compare with their own rankings using our feature set. (3) Then we train a model using Google Scholar's rankings and try to predict our gold standard. (4) We compare it to the model trained on our gold standard to predict our gold standard.
Evaluation Measures
Normalized Discounted Cumulative Gain (NDCG) is a common measure for comparing a list of estimated document relevance judgments with a list of known judgments ( BIBREF28 ). To calculate NDCG we first calculate a ranking's Discounted Cumulative Gain (DCG) as: $\mathrm{DCG}_p = \sum_{i=1}^{p} \frac{rel_i}{\log_2(i+1)}$
where $rel_i$ is the relevance judgment at position $i$. Intuitively, DCG penalizes retrieval of documents that are not relevant ($rel_i = 0$). However, DCG is an unbounded value. In order to compare the DCG between two models, we must normalize it. To do this, we use the ideal DCG (IDCG), i.e., the maximum possible DCG given the relevance judgments. The maximum possible DCG occurs when the relevance judgments are in the correct order: $\mathrm{NDCG}_p = \frac{\mathrm{DCG}_p}{\mathrm{IDCG}_p}$
The NDCG value is in the range of 0 to 1, where 0 means that no relevant documents were retrieved, and 1 means that the relevant documents were retrieved and in the correct order of their relevance judgments.
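A compact sketch for a single paper: pass the gold 1-5 judgments in the order the model ranked the citations. The $rel_i / \log_2(i+1)$ gain is an assumption, since the paper does not state which NDCG variant it uses.

```python
import numpy as np

def ndcg_from_gold(gold_in_predicted_order):
    """NDCG for one annotated paper, given the gold judgments reordered by
    the model's ranking (best-first)."""
    rel = np.asarray(gold_in_predicted_order, dtype=float)
    discounts = np.log2(np.arange(2, len(rel) + 2))
    dcg = float(np.sum(rel / discounts))
    idcg = float(np.sum(np.sort(rel)[::-1] / discounts))
    return dcg / idcg

print(round(ndcg_from_gold([5, 3, 4, 2, 1]), 3))  # one swap near the top
```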
Kendall's $\tau$ is a measure of the correlation between two ranked lists. It compares the number of concordant pairs with the number of discordant pairs between each list. A concordant pair is defined over two observations $(x_i, y_i)$ and $(x_j, y_j)$. If $x_i > x_j$ and $y_i > y_j$, then the pair at indices $(i, j)$ is concordant, that is, the rankings at $i$ and $j$ in both ranking sets $X$ and $Y$ agree with each other. Similarly, a pair $(i, j)$ is discordant if $x_i > x_j$ and $y_i < y_j$, or $x_i < x_j$ and $y_i > y_j$. Kendall's $\tau$ is then defined as: $\tau = \frac{C - D}{\frac{1}{2}n(n-1)}$
where $C$ is the number of concordant pairs, $D$ is the number of discordant pairs, and the denominator $\frac{1}{2}n(n-1)$ represents the total number of possible pairs. Thus, Kendall's $\tau$ falls in the range $[-1, 1]$, where -1 means that the ranked lists are perfectly negatively correlated, 0 means that they are not significantly correlated, and 1 means that the ranked lists are perfectly correlated. One downside of this measure is that it does not take into account where in the ranked list an error occurs. Information retrieval, in general, cares more about errors near the top of the list rather than errors near the bottom of the list.
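The same statistic can be checked against a library implementation; the toy rankings below are invented and stand in for one paper's gold and predicted orderings.

```python
from scipy.stats import kendalltau

gold = [5, 4, 3, 2, 1]        # author's judgments
predicted = [5, 3, 4, 2, 1]   # model ranking with one adjacent swap

tau, p_value = kendalltau(gold, predicted)
print(round(tau, 3))          # 0.8 for a single discordant pair out of ten
```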
Average-Precision INLINEFORM0 ( BIBREF29 ) (or INLINEFORM1 ) extends on Kendall's INLINEFORM2 by incorporating the position of errors. If an error occurs near the top of the list, then that is penalized heavier than an error occurring at the bottom of the list. To achieve this, INLINEFORM3 incorporates ideas from the popular Average Precision measure, were we calculate the precision at each index of the list and then average them together. INLINEFORM4 is defined as: DISPLAYFORM0
Intuitively, if an error occurs at the top of the list, that error is propagated into each iteration of the summation, meaning that its penalty is added multiple times. INLINEFORM0 's range is between -1 and 1, where -1 means the lists are perfectly negatively correlated, 0 means that they are not significantly correlated, and 1 means that they are perfectly correlated.
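The sketch below illustrates the AP correlation idea (following the formulation of BIBREF29); it assumes a strict ranking with no ties, and the variable names are ours. An error at an early position is counted in every later precision term, which is what makes the measure top-weighted.

def tau_ap(true_ranks, predicted_order):
    # predicted_order lists item indices from best to worst under the model;
    # true_ranks[i] is the gold rank of item i (1 = most important).
    n = len(predicted_order)
    total = 0.0
    for i in range(1, n):
        item = predicted_order[i]
        # Number of items placed above position i that the gold standard also
        # ranks above this item, i.e. correctly ordered pairs at this position.
        correct = sum(1 for j in range(i)
                      if true_ranks[predicted_order[j]] < true_ranks[item])
        total += correct / i
    return 2.0 * total / (n - 1) - 1.0

# Example: a single swap near the top of a five-item list.
print(tau_ap([1, 2, 3, 4, 5], [1, 0, 2, 3, 4]))  # 0.5, versus Kendall's tau of 0.8

The example shows the intended behaviour: the same single swap that Kendall's INLINEFORM0 scores at 0.8 drops to 0.5 under INLINEFORM1 because it occurs at the top of the list.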
Forward Feature Selection
Forward feature selection was performed by iteratively testing each feature one at a time. The highest performing feature is kept in the model, and another sweep is done over the remaining features. This continues until all features have been selected. This approach allows us to explore the effect of combinations of features and the effect of having too many or too few features. It also allows us to evaluate which features and combinations of features are the most powerful.
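A schematic version of this greedy procedure is given below; the evaluate function (returning, say, validation NDCG for a model trained on a candidate feature subset) is a hypothetical interface we introduce purely for illustration, not part of the original pipeline.

def forward_selection(features, evaluate):
    # Greedily add the single feature that most improves the evaluation score.
    selected, remaining, history = [], list(features), []
    while remaining:
        best = max(remaining, key=lambda f: evaluate(selected + [f]))
        selected.append(best)
        remaining.remove(best)
        history.append((list(selected), evaluate(selected)))
    return history  # inspect the trace to pick the best-performing subset size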
Results
We first compare our gold standard to the baselines. A random baseline is provided for reference. Because all of the documents that we rank are relevant, NDCG will be fairly high simply by chance. We find that the number of times a document is mentioned in the annotated document is significantly better than the random baseline or the citation impact. The more times a document is mentioned in a paper, the more likely the author was to annotate it as important. Interestingly, we see a negative correlation with the citation impact. The more times a document is mentioned in the literature, the less likely it is to be important. These results are shown in Table TABREF14 .
Next we rank the raw values of the features and compare them to our gold standard to obtain a baseline (Table TABREF15 ). The best performing text similarity feature is the similarity between the abstract of the annotated document and the abstract of the cited document. However, the features counting the number of times a cited document is mentioned in the text of the annotated document are also high-scoring, especially on the INLINEFORM0 correlation coefficient. These results indicate that text similarity alone may not be a good measure for judging the rank of a document.
Next we test three different feature sets for our supervised learning-to-rank models. The model using only the text similarity features performs poorly: NDCG stays at baseline and the correlation measures are low. Models that incorporate information about the age, number of times a cited document was referenced, and the citation impact of that document in addition to the text similarity features significantly outperformed models that used only text similarity features INLINEFORM0 . Because INLINEFORM1 takes into account the position in the ranking of the errors, this indicates that the All Features model was able to better correctly place highly ranked documents above lower ranked ones. Similarly, because Kendall's INLINEFORM2 is an overall measure of correlation that does not take into account the position of errors, the higher value here means that more rankings were correctly placed. Interestingly, feature selection (which is optimized for NDCG) does not outperform the model using all of the features in terms of our correlation measures. The features chosen during forward feature selection are (1) the citation impact, (2) number of mentions in the full text, (3) text similarity between the annotated document's title and the referenced document's abstract, (4) the text similarity between the annotated document's discussion/conclusion section and the referenced document's title. These results are shown in Table TABREF16 . The models trained on the text similarity judgments perform worse than the models trained on the annotated data. However, in terms of both NDCG and the correlation measures, they perform significantly better than the random baseline.
Next we compare our model to Google Scholar's rankings. Using the ranking collected from Google Scholar, we build a training set to try to predict our authors' rankings. We find that Google Scholar performs similarly to the text-only features model. This indicates that the rankings we obtained from the authors are substantially different than the rankings that Google Scholar provides. Results appear in Table TABREF17 .
Discussion
We found that authors rank the references they cite substantially differently from rankings based on text-similarity. Our results show that decomposing a document into a set of features that is able to capture that difference is key. While text similarity is indeed important (as evidenced by the Similarity(a,a) feature in Table TABREF15 ), we also found that the number of times a document is referenced in the text and the number of times a document is referenced in the literature are also both important features (via feature selection). The more often a citation is mentioned in the text, the more likely it is to be important. This feature is often overlooked in article citation recommendation. We also found that recency is important: the age of the citation is negatively correlated with the rank. Newer citations are more likely to be directly important than older, more foundational citations. Additionally, the number of times a document is cited in the literature is negatively correlated with rank. This is likely due to highly cited documents being more foundational works; they may be older papers that are important to the field but not directly influential to the new work.
The model trained using the author's judgments does significantly better than the model trained using the text-similarity-based judgments. An error analysis was performed to find out why some of the rankings disagreed with the author's annotations. We found that in some cases our features were unable to capture the relationship: for example a biomedical document applying a model developed in another field to the dataset may use very different language to describe the model than the citation. Previous work adopting topic models to query document search may prove useful for such cases.
A small subset of features ended up performing as well as the full list of features. The number of times a citation was mentioned and the citation impact score in the literature ended up being two of the most important features. Indeed, without the citation-based features, the model performs as though it were trained with the text-similarity rankings. Feature engineering is a part of any learning-to-rank system, especially in domain-specific contexts. Citations are an integral feature of our dataset. For learning-to-rank to be applied to other datasets feature engineering must also occur to exploit the unique properties of those datasets. However, we show that combining the domain-specific features with more traditional text-based features does improve the model's scores over simply using the domain-specific features themselves.
Interestingly, citation impact and age of the citation are both negatively correlated with rank. We hypothesize that this is because both measures can be indicators of recency: a new publication is more likely to be directly influenced by more recent work. Many other related search tools, however, treat the citation impact as a positive feature of relatedness: documents with a higher citation impact appear higher on the list of related articles than those with lower citation impacts. This may be the opposite of what the user actually desires.
We also found that rankings from our text-similarity based IR system or Google Scholar's IR system were unable to rank documents by the authors' annotations as well as our system. In one sense, this is reasonable: the rankings coming from these systems were from a different system than the author annotations. However, in domain-specific IR, domain experts are the best judges. We built a system that exploits these expert judgments. The text similarity and Google Scholar models were able to do this to some extent, performing above the random baseline, but not on the level of our model.
Additionally, we observe that NDCG may not be the most appropriate measure for comparing short ranked lists where all of the documents are relevant to some degree. NDCG gives a lot of credit to relevant documents that occur in the highest ranks. However, all of the documents here are relevant, just to varying degrees. Thus, NDCG does not seem to be the most appropriate measure, as is evident in our scores. The correlation coefficients from Kendall's INLINEFORM0 and INLINEFORM1 seem to be far more appropriate for this case, as they are not concerned with relevance, only ranking.
One limitation of our work is that we selected a small set of references based on their similarities to the article that cites them. Ideally, we would have had authors rank all of their citations for us, but this would have been a daunting task for authors to perform. We chose to use the Google Scholar dataset in order to attempt to mitigate this: we obtain a ranking for the set of references from a system that is also ranking many other documents. The five citations selected by TF*IDF weighted cosine similarity represent a “hard” gold standard: we are attempting to rank documents that are all known to be relevant by their nature and have high similarity with the text. Additionally, there is a plethora of other, more expensive features we could explore to improve the model. Citation network features, phrasal concepts, and topic models could all be used to help improve our results, at the cost of computational complexity.
We have developed a model for fast related-document ranking based on crowd-sourced data. The model, data, and data collection software are all publicly available and can easily be used in future applications as an automatic search to help users find the most important citations given a particular document. The experimental setup is portable to other datasets with some feature engineering. We were able to identify that several domain-specific features were crucial to our model, and that we were able to improve on the results of simply using those features alone by adding more traditional features.
Query-by-document is a complicated and challenging task. We provide an approach with an easily obtained dataset and a computationally inexpensive model. By working with biomedical researchers we were able to build a system that ranks documents in a quantitatively different way than previous systems, and to provide a tool that helps researchers find related documents.
Acknowledgments
We would like to thank all of the authors who took the time to answer our citation ranking survey. This work is supported by National Institutes of Health with the grant number 1R01GM095476. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. | asked the authors to rank by closeness five citations we selected from their paper |
1951cde612751410355610074c3c69cec94824c2 | 1951cde612751410355610074c3c69cec94824c2_0 | Q: Which deep learning model performed better?
Text: Introduction
In recent years, social media, forums, blogs and other forms of online communication have radically affected everyday life, especially how people express their opinions and comments. The extraction of useful information (such as people's opinions about a company's brand) from the huge amount of unstructured data is vital for most companies and organizations BIBREF0 . Product reviews are important for business owners, who can make business decisions based on automatically classified user opinions towards products and services. The application of sentiment analysis is not limited to product or movie reviews; it can be applied to different fields such as news, politics and sport. For example, in online political debates, sentiment analysis can be used to identify people's opinions on a certain election candidate or political party BIBREF1 BIBREF2 BIBREF3 . In this context, sentiment analysis has been widely applied to different languages using traditional and advanced machine learning techniques. However, limited research has been conducted to develop models for the Persian language.
Sentiment analysis is a method to automatically process large amounts of data and classify text into positive or negative sentiments BIBREF4 BIBREF5 . Sentiment analysis can be performed at two levels: the document level or the sentence level. At the document level it is used to classify the sentiment expressed in the whole document (positive or negative), whereas at the sentence level it is used to identify the sentiment expressed only in the sentence under analysis BIBREF6 BIBREF7 .
In the literature, deep learning based automated feature extraction has been shown to outperform state-of-the-art manual feature engineering based classifiers such as Support Vector Machine (SVM), Naive Bayes (NB) or Multilayer Perceptron (MLP) etc. One of the important techniques in deep learning is the autoencoder that generally involves reducing the number of feature dimensions under consideration. The aim of dimensionality reduction is to obtain a set of principal variables to improve the performance of the approach. Similarly, CNNs have been proven to be very effective in sentiment analysis. However, little work has been carried out to exploit deep learning based feature representation for Persian sentiment analysis BIBREF8 BIBREF9 . In this paper, we present two deep learning models (deep autoencoders and CNNs) for Persian sentiment analysis. The obtained deep learning results are compared with MLP.
The rest of the paper is organized as follows: Section 2 presents related work. Section 3 presents methodology and experimental results. Finally, section 4 concludes this paper.
Related Works
In the literature, extensive research has been carried out to build novel sentiment analysis models using both shallow and deep learning algorithms. For example, the authors in BIBREF10 proposed a novel deep learning approach for polarity detection in product reviews. The authors addressed two major limitations of stacked denoising autoencoders: high computational cost and the lack of scalability to high-dimensional features. Their experimental results showed the effectiveness of the proposed autoencoders, achieving accuracy of up to 87%. Zhai et al. BIBREF11 proposed a five-layer autoencoder for learning specific representations of textual data. The autoencoders are generalised using a loss function and a discriminative loss function derived from label information. The experimental results showed that the model outperformed bag of words, denoising autoencoders and other traditional methods, achieving an accuracy rate of up to 85%. Sun et al. BIBREF12 proposed a novel method to extract contextual information from text using a convolutional autoencoder architecture. The experimental results showed that the proposed model outperformed traditional SVM and Naive Bayes models, reporting accuracies of 83.1%, 63.9% and 67.8% respectively.
Su et al. BIBREF13 proposed a neural generative autoencoder for learning bilingual word embeddings. The experimental results showed the effectiveness of their approach on English-Chinese, English-German, English-French and English-Spanish (75.36% accuracy). Kim et al. BIBREF14 proposed a method to capture the non-linear structure of data using a CNN classifier. The experimental results showed the effectiveness of the method on multi-domain datasets (movie reviews and product reviews). However, the disadvantage is that only SVM and Naive Bayes classifiers are used to evaluate the performance of the method and deep learning classifiers are not exploited. Zhang et al. BIBREF15 proposed an approach using deep learning classifiers to detect polarity in Japanese movie reviews. The approach used a denoising autoencoder and was adapted to other domains such as product reviews. The advantage of the approach is that it does not depend on any particular language and could be used for various languages by applying different datasets. AP et al. BIBREF16 proposed a CNN-based model for cross-language learning of vectorial word representations that are coherent between two languages. The method was evaluated using English and German movie review datasets. The experimental results showed that the CNN (83.45% accuracy) outperformed the SVM (65.25% accuracy).
Zhou et al., BIBREF17 proposed an autoencoder architecture constituting an LSTM-encoder and decoder in order to capture features in the text and reduce dimensionality of data. The LSTM encoder used the interactive scheme to go through the sequence of sentences and LSTM decoder reconstructed the vector of sentences. The model is evaluated using different datasets such as book reviews, DVD reviews, and music reviews, acquiring accuracy up to 81.05%, 81.06%, and 79.40% respectively. Mesnil et al., BIBREF18 proposed an approach using ensemble classification to detect polarity in the movie reviews. The authors combined several machine learning algorithms such as SVM, Naive Bayes and RNN to achieve better results, where autoencoders were used to reduce the dimensionality of features. The experimental results showed the combination of unigram, bigram and trigram features (91.87% accuracy) outperformed unigram (91.56% accuracy) and bigram (88.61% accuracy).
Scheible et al., BIBREF19 trained an approach using semi-supervised recursive autoencoder to detect polarity in movie reviews dataset, consisted of 5000 positive and 5000 negative sentiments. The experimental results demonstrated that the proposed approach successfully detected polarity in movie reviews dataset (83.13% accuracy) and outperformed standard SVM (68.36% accuracy) model. Dai et al., BIBREF20 developed an autoencoder to detect polarity in the text using deep learning classifier. The LSTM was trained on IMDB movie reviews dataset. The experimental results showed the outperformance of their proposed approach over SVM. In table 1 some of the autoencoder approaches are depicted.
Methodology and Experimental Results
The novel dataset used in this work was collected manually and includes Persian movie reviews from 2014 to 2016. A subset of dataset was used to train the neural network (60% training dataset) and rest of the data (40%) was used to test and validate the performance of the trained neural network (testing set (30%), validation set (10%)). There are two types of labels in the dataset: positive or negative. The reviews were manually annotated by three native Persian speakers aged between 30 and 50 years old.
After data collection, the corpus was pre-processed using tokenisation, normalisation and stemming techniques. The process of splitting sentences into single words or tokens is called tokenisation. For example, "The movie is great" is changed to "The", "movie", "is", "great" BIBREF21 . There are some words which contain numbers or repeated letters; for example, "great" may be written as "gr8" and "good" as "gooood". Normalisation is used to convert such words into their normal forms BIBREF22 . The process of reducing words to their roots is called stemming; for example, "going" is changed to "go" BIBREF23 . Finally, words were converted into vectors: fastText was used to convert each word into a 300-dimensional vector. FastText is a library for text classification and representation BIBREF24 BIBREF25 BIBREF9 .
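A rough sketch of this preprocessing and embedding pipeline is shown below. The paper does not name its Persian tokeniser or stemmer, so the hazm toolkit and the cc.fa.300.bin pre-trained fastText model are assumptions made purely for illustration.

import fasttext
from hazm import Normalizer, Stemmer, word_tokenize  # assumed Persian NLP toolkit

normalizer, stemmer = Normalizer(), Stemmer()
ft = fasttext.load_model("cc.fa.300.bin")  # assumed pre-trained 300-d Persian model

def review_to_vectors(review):
    # normalise -> tokenise -> stem -> look up a 300-dimensional fastText vector per token
    tokens = word_tokenize(normalizer.normalize(review))
    stems = [stemmer.stem(token) for token in tokens]
    return [ft.get_word_vector(stem) for stem in stems]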
For classification, an MLP, autoencoders and CNNs have been used. Fig. 1 depicts the modelled MLP architecture. The MLP classifier was trained for 100 iterations BIBREF26 . Fig. 2 depicts the modelled autoencoder architecture. An autoencoder is a feed-forward deep neural network trained in an unsupervised manner and is used for dimensionality reduction. It consists of input, hidden and output layers, and is used to compress the input into a latent space from which the output is reconstructed BIBREF27 BIBREF28 BIBREF29 . The exploited autoencoder model, depicted in Fig. 2, consists of one input layer, three hidden layers (1500, 512, 1500) and an output layer. The Convolutional Neural Network contains three types of layers (input, hidden and output layers). The hidden layers consist of convolutional layers, pooling layers, fully connected layers and a normalisation layer. The hidden neuron INLINEFORM0 of unit j, with bias INLINEFORM1 , is computed as a weighted sum over the continuous visible nodes v, given by: DISPLAYFORM0
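For illustration only, a Keras sketch of an autoencoder with the stated hidden sizes (1500, 512, 1500) might look as follows; the input dimensionality, activations, loss and optimiser are our assumptions, since the text specifies only the layer sizes.

from tensorflow import keras
from tensorflow.keras import layers

input_dim = 300  # assumed: e.g. an averaged 300-d fastText vector per review
inputs = keras.Input(shape=(input_dim,))
h = layers.Dense(1500, activation="relu")(inputs)
code = layers.Dense(512, activation="relu")(h)       # compressed latent representation
h = layers.Dense(1500, activation="relu")(code)
outputs = layers.Dense(input_dim, activation="linear")(h)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")     # trained to reconstruct the input
encoder = keras.Model(inputs, code)                   # reduced features for the classifier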
The modelled CNN architecture is depicted in Fig. 3 BIBREF29 BIBREF28 . For CNN modelling, each utterance was represented as a concatenation of the vectors of its constituent words. The network has a total of 11 layers: 4 convolution layers, 4 max pooling layers and 3 fully connected layers. The convolution layers have filters of size 2 with 15 feature maps. Each convolution layer is followed by a max pooling layer with window size 2. The last max pooling layer is followed by fully connected layers of size 5000, 500 and 4. For the final layer, softmax activation is used.
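A hedged Keras sketch that mirrors the layer counts described above (4 convolution layers with 15 filters of size 2, 4 max pooling layers of size 2, and dense layers of 5000, 500 and 4 with a softmax output) is given below; the sequence length, activations and optimiser are assumptions, as they are not specified in the text.

from tensorflow import keras
from tensorflow.keras import layers

max_len = 100  # assumed maximum number of tokens per review
model = keras.Sequential([
    keras.Input(shape=(max_len, 300)),        # 300-d fastText vector per word
    layers.Conv1D(15, 2, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(15, 2, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(15, 2, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(15, 2, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Flatten(),
    layers.Dense(5000, activation="relu"),
    layers.Dense(500, activation="relu"),
    layers.Dense(4, activation="softmax"),    # output size as stated in the text
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])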
To evaluate the performance of the proposed approach, precision (1), recall (2), F-measure (3), and prediction accuracy (4) have been used as performance metrics. The experimental results are shown in Table 1, where it can be seen that the autoencoder outperformed the MLP and the CNN outperformed the autoencoder, with the highest achieved accuracy of 82.6%. DISPLAYFORM0 DISPLAYFORM1
where TP denotes true positives, TN true negatives, FP false positives, and FN false negatives.
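The four metrics reduce to simple ratios over the confusion-matrix counts; a small sketch (with hypothetical counts) is shown below.

def classification_metrics(tp, tn, fp, fn):
    precision = tp / (tp + fp)                                   # Eq. (1)
    recall = tp / (tp + fn)                                      # Eq. (2)
    f_measure = 2 * precision * recall / (precision + recall)    # Eq. (3)
    accuracy = (tp + tn) / (tp + tn + fp + fn)                   # Eq. (4)
    return precision, recall, f_measure, accuracy

# Example with hypothetical counts.
print(classification_metrics(tp=410, tn=416, fp=84, fn=90))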
Conclusion
Sentiment analysis has been used extensively for a wide range of real-world applications, ranging from product reviews and survey feedback to business intelligence and operational improvements. However, the majority of research efforts are devoted to the English language only, even though information of great importance is also available in other languages. In this work, we focus on developing sentiment analysis models for the Persian language, specifically for Persian movie reviews. Two deep learning models (deep autoencoders and deep CNNs) are developed and compared with a state-of-the-art shallow MLP-based machine learning model. Simulation results revealed that our proposed CNN model outperforms the autoencoders and the MLP. In future, we intend to exploit more advanced deep learning models such as Long Short-Term Memory (LSTM) and LSTM-CNNs to further evaluate performance on our novel Persian dataset.
Acknowledgment
Amir Hussain and Ahsan Adeel were supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grant No.EP/M026981/1. | autoencoders |
1951cde612751410355610074c3c69cec94824c2 | 1951cde612751410355610074c3c69cec94824c2_1 | Q: Which deep learning model performed better?
Text: Introduction
In recent years, social media, forums, blogs and other forms of online communication have radically affected everyday life, especially how people express their opinions and comments. The extraction of useful information (such as people's opinions about a company's brand) from the huge amount of unstructured data is vital for most companies and organizations BIBREF0 . Product reviews are important for business owners, who can make business decisions based on automatically classified user opinions towards products and services. The application of sentiment analysis is not limited to product or movie reviews; it can be applied to different fields such as news, politics and sport. For example, in online political debates, sentiment analysis can be used to identify people's opinions on a certain election candidate or political party BIBREF1 BIBREF2 BIBREF3 . In this context, sentiment analysis has been widely applied to different languages using traditional and advanced machine learning techniques. However, limited research has been conducted to develop models for the Persian language.
Sentiment analysis is a method to automatically process large amounts of data and classify text into positive or negative sentiments BIBREF4 BIBREF5 . Sentiment analysis can be performed at two levels: the document level or the sentence level. At the document level it is used to classify the sentiment expressed in the whole document (positive or negative), whereas at the sentence level it is used to identify the sentiment expressed only in the sentence under analysis BIBREF6 BIBREF7 .
In the literature, deep learning based automated feature extraction has been shown to outperform state-of-the-art manual feature engineering based classifiers such as Support Vector Machine (SVM), Naive Bayes (NB) or Multilayer Perceptron (MLP) etc. One of the important techniques in deep learning is the autoencoder that generally involves reducing the number of feature dimensions under consideration. The aim of dimensionality reduction is to obtain a set of principal variables to improve the performance of the approach. Similarly, CNNs have been proven to be very effective in sentiment analysis. However, little work has been carried out to exploit deep learning based feature representation for Persian sentiment analysis BIBREF8 BIBREF9 . In this paper, we present two deep learning models (deep autoencoders and CNNs) for Persian sentiment analysis. The obtained deep learning results are compared with MLP.
The rest of the paper is organized as follows: Section 2 presents related work. Section 3 presents methodology and experimental results. Finally, section 4 concludes this paper.
Related Works
In the literature, extensive research has been carried out to build novel sentiment analysis models using both shallow and deep learning algorithms. For example, the authors in BIBREF10 proposed a novel deep learning approach for polarity detection in product reviews. The authors addressed two major limitations of stacked denoising autoencoders: high computational cost and the lack of scalability to high-dimensional features. Their experimental results showed the effectiveness of the proposed autoencoders, achieving accuracy of up to 87%. Zhai et al. BIBREF11 proposed a five-layer autoencoder for learning specific representations of textual data. The autoencoders are generalised using a loss function and a discriminative loss function derived from label information. The experimental results showed that the model outperformed bag of words, denoising autoencoders and other traditional methods, achieving an accuracy rate of up to 85%. Sun et al. BIBREF12 proposed a novel method to extract contextual information from text using a convolutional autoencoder architecture. The experimental results showed that the proposed model outperformed traditional SVM and Naive Bayes models, reporting accuracies of 83.1%, 63.9% and 67.8% respectively.
Su et al. BIBREF13 proposed a neural generative autoencoder for learning bilingual word embeddings. The experimental results showed the effectiveness of their approach on English-Chinese, English-German, English-French and English-Spanish (75.36% accuracy). Kim et al. BIBREF14 proposed a method to capture the non-linear structure of data using a CNN classifier. The experimental results showed the effectiveness of the method on multi-domain datasets (movie reviews and product reviews). However, the disadvantage is that only SVM and Naive Bayes classifiers are used to evaluate the performance of the method and deep learning classifiers are not exploited. Zhang et al. BIBREF15 proposed an approach using deep learning classifiers to detect polarity in Japanese movie reviews. The approach used a denoising autoencoder and was adapted to other domains such as product reviews. The advantage of the approach is that it does not depend on any particular language and could be used for various languages by applying different datasets. AP et al. BIBREF16 proposed a CNN-based model for cross-language learning of vectorial word representations that are coherent between two languages. The method was evaluated using English and German movie review datasets. The experimental results showed that the CNN (83.45% accuracy) outperformed the SVM (65.25% accuracy).
Zhou et al., BIBREF17 proposed an autoencoder architecture constituting an LSTM-encoder and decoder in order to capture features in the text and reduce dimensionality of data. The LSTM encoder used the interactive scheme to go through the sequence of sentences and LSTM decoder reconstructed the vector of sentences. The model is evaluated using different datasets such as book reviews, DVD reviews, and music reviews, acquiring accuracy up to 81.05%, 81.06%, and 79.40% respectively. Mesnil et al., BIBREF18 proposed an approach using ensemble classification to detect polarity in the movie reviews. The authors combined several machine learning algorithms such as SVM, Naive Bayes and RNN to achieve better results, where autoencoders were used to reduce the dimensionality of features. The experimental results showed the combination of unigram, bigram and trigram features (91.87% accuracy) outperformed unigram (91.56% accuracy) and bigram (88.61% accuracy).
Scheible et al., BIBREF19 trained an approach using semi-supervised recursive autoencoder to detect polarity in movie reviews dataset, consisted of 5000 positive and 5000 negative sentiments. The experimental results demonstrated that the proposed approach successfully detected polarity in movie reviews dataset (83.13% accuracy) and outperformed standard SVM (68.36% accuracy) model. Dai et al., BIBREF20 developed an autoencoder to detect polarity in the text using deep learning classifier. The LSTM was trained on IMDB movie reviews dataset. The experimental results showed the outperformance of their proposed approach over SVM. In table 1 some of the autoencoder approaches are depicted.
Methodology and Experimental Results
The novel dataset used in this work was collected manually and includes Persian movie reviews from 2014 to 2016. A subset of dataset was used to train the neural network (60% training dataset) and rest of the data (40%) was used to test and validate the performance of the trained neural network (testing set (30%), validation set (10%)). There are two types of labels in the dataset: positive or negative. The reviews were manually annotated by three native Persian speakers aged between 30 and 50 years old.
After data collection, the corpus was pre-processed using tokenisation, normalisation and stemming techniques. The process of splitting sentences into single words or tokens is called tokenisation. For example, "The movie is great" is changed to "The", "movie", "is", "great" BIBREF21 . There are some words which contain numbers or repeated letters; for example, "great" may be written as "gr8" and "good" as "gooood". Normalisation is used to convert such words into their normal forms BIBREF22 . The process of reducing words to their roots is called stemming; for example, "going" is changed to "go" BIBREF23 . Finally, words were converted into vectors: fastText was used to convert each word into a 300-dimensional vector. FastText is a library for text classification and representation BIBREF24 BIBREF25 BIBREF9 .
For classification, an MLP, autoencoders and CNNs have been used. Fig. 1 depicts the modelled MLP architecture. The MLP classifier was trained for 100 iterations BIBREF26 . Fig. 2 depicts the modelled autoencoder architecture. An autoencoder is a feed-forward deep neural network trained in an unsupervised manner and is used for dimensionality reduction. It consists of input, hidden and output layers, and is used to compress the input into a latent space from which the output is reconstructed BIBREF27 BIBREF28 BIBREF29 . The exploited autoencoder model, depicted in Fig. 2, consists of one input layer, three hidden layers (1500, 512, 1500) and an output layer. The Convolutional Neural Network contains three types of layers (input, hidden and output layers). The hidden layers consist of convolutional layers, pooling layers, fully connected layers and a normalisation layer. The hidden neuron INLINEFORM0 of unit j, with bias INLINEFORM1 , is computed as a weighted sum over the continuous visible nodes v, given by: DISPLAYFORM0
The modelled CNN architecture is depicted in Fig. 3 BIBREF29 BIBREF28 . For CNN modelling, each utterance was represented as a concatenation of the vectors of its constituent words. The network has a total of 11 layers: 4 convolution layers, 4 max pooling layers and 3 fully connected layers. The convolution layers have filters of size 2 with 15 feature maps. Each convolution layer is followed by a max pooling layer with window size 2. The last max pooling layer is followed by fully connected layers of size 5000, 500 and 4. For the final layer, softmax activation is used.
To evaluate the performance of the proposed approach, precision (1), recall (2), F-measure (3), and prediction accuracy (4) have been used as performance metrics. The experimental results are shown in Table 1, where it can be seen that the autoencoder outperformed the MLP and the CNN outperformed the autoencoder, with the highest achieved accuracy of 82.6%. DISPLAYFORM0 DISPLAYFORM1
where TP denotes true positives, TN true negatives, FP false positives, and FN false negatives.
Conclusion
Sentiment analysis has been used extensively for a wide range of real-world applications, ranging from product reviews and survey feedback to business intelligence and operational improvements. However, the majority of research efforts are devoted to the English language only, even though information of great importance is also available in other languages. In this work, we focus on developing sentiment analysis models for the Persian language, specifically for Persian movie reviews. Two deep learning models (deep autoencoders and deep CNNs) are developed and compared with a state-of-the-art shallow MLP-based machine learning model. Simulation results revealed that our proposed CNN model outperforms the autoencoders and the MLP. In future, we intend to exploit more advanced deep learning models such as Long Short-Term Memory (LSTM) and LSTM-CNNs to further evaluate performance on our novel Persian dataset.
Acknowledgment
Amir Hussain and Ahsan Adeel were supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grant No.EP/M026981/1. | CNN |
4140d8b5a78aea985546aa1e323de12f63d24add | 4140d8b5a78aea985546aa1e323de12f63d24add_0 | Q: By how much did the results improve?
Text: Introduction
In recent years, social media, forums, blogs and other forms of online communication have radically affected everyday life, especially how people express their opinions and comments. The extraction of useful information (such as people's opinions about a company's brand) from the huge amount of unstructured data is vital for most companies and organizations BIBREF0 . Product reviews are important for business owners, who can make business decisions based on automatically classified user opinions towards products and services. The application of sentiment analysis is not limited to product or movie reviews; it can be applied to different fields such as news, politics and sport. For example, in online political debates, sentiment analysis can be used to identify people's opinions on a certain election candidate or political party BIBREF1 BIBREF2 BIBREF3 . In this context, sentiment analysis has been widely applied to different languages using traditional and advanced machine learning techniques. However, limited research has been conducted to develop models for the Persian language.
Sentiment analysis is a method to automatically process large amounts of data and classify text into positive or negative sentiments BIBREF4 BIBREF5 . Sentiment analysis can be performed at two levels: the document level or the sentence level. At the document level it is used to classify the sentiment expressed in the whole document (positive or negative), whereas at the sentence level it is used to identify the sentiment expressed only in the sentence under analysis BIBREF6 BIBREF7 .
In the literature, deep learning based automated feature extraction has been shown to outperform state-of-the-art manual feature engineering based classifiers such as Support Vector Machine (SVM), Naive Bayes (NB) or Multilayer Perceptron (MLP) etc. One of the important techniques in deep learning is the autoencoder that generally involves reducing the number of feature dimensions under consideration. The aim of dimensionality reduction is to obtain a set of principal variables to improve the performance of the approach. Similarly, CNNs have been proven to be very effective in sentiment analysis. However, little work has been carried out to exploit deep learning based feature representation for Persian sentiment analysis BIBREF8 BIBREF9 . In this paper, we present two deep learning models (deep autoencoders and CNNs) for Persian sentiment analysis. The obtained deep learning results are compared with MLP.
The rest of the paper is organized as follows: Section 2 presents related work. Section 3 presents methodology and experimental results. Finally, section 4 concludes this paper.
Related Works
In the literature, extensive research has been carried out to build novel sentiment analysis models using both shallow and deep learning algorithms. For example, the authors in BIBREF10 proposed a novel deep learning approach for polarity detection in product reviews. The authors addressed two major limitations of stacked denoising autoencoders: high computational cost and the lack of scalability to high-dimensional features. Their experimental results showed the effectiveness of the proposed autoencoders, achieving accuracy of up to 87%. Zhai et al. BIBREF11 proposed a five-layer autoencoder for learning specific representations of textual data. The autoencoders are generalised using a loss function and a discriminative loss function derived from label information. The experimental results showed that the model outperformed bag of words, denoising autoencoders and other traditional methods, achieving an accuracy rate of up to 85%. Sun et al. BIBREF12 proposed a novel method to extract contextual information from text using a convolutional autoencoder architecture. The experimental results showed that the proposed model outperformed traditional SVM and Naive Bayes models, reporting accuracies of 83.1%, 63.9% and 67.8% respectively.
Su et al. BIBREF13 proposed a neural generative autoencoder for learning bilingual word embeddings. The experimental results showed the effectiveness of their approach on English-Chinese, English-German, English-French and English-Spanish (75.36% accuracy). Kim et al. BIBREF14 proposed a method to capture the non-linear structure of data using a CNN classifier. The experimental results showed the effectiveness of the method on multi-domain datasets (movie reviews and product reviews). However, the disadvantage is that only SVM and Naive Bayes classifiers are used to evaluate the performance of the method and deep learning classifiers are not exploited. Zhang et al. BIBREF15 proposed an approach using deep learning classifiers to detect polarity in Japanese movie reviews. The approach used a denoising autoencoder and was adapted to other domains such as product reviews. The advantage of the approach is that it does not depend on any particular language and could be used for various languages by applying different datasets. AP et al. BIBREF16 proposed a CNN-based model for cross-language learning of vectorial word representations that are coherent between two languages. The method was evaluated using English and German movie review datasets. The experimental results showed that the CNN (83.45% accuracy) outperformed the SVM (65.25% accuracy).
Zhou et al., BIBREF17 proposed an autoencoder architecture constituting an LSTM-encoder and decoder in order to capture features in the text and reduce dimensionality of data. The LSTM encoder used the interactive scheme to go through the sequence of sentences and LSTM decoder reconstructed the vector of sentences. The model is evaluated using different datasets such as book reviews, DVD reviews, and music reviews, acquiring accuracy up to 81.05%, 81.06%, and 79.40% respectively. Mesnil et al., BIBREF18 proposed an approach using ensemble classification to detect polarity in the movie reviews. The authors combined several machine learning algorithms such as SVM, Naive Bayes and RNN to achieve better results, where autoencoders were used to reduce the dimensionality of features. The experimental results showed the combination of unigram, bigram and trigram features (91.87% accuracy) outperformed unigram (91.56% accuracy) and bigram (88.61% accuracy).
Scheible et al., BIBREF19 trained an approach using semi-supervised recursive autoencoder to detect polarity in movie reviews dataset, consisted of 5000 positive and 5000 negative sentiments. The experimental results demonstrated that the proposed approach successfully detected polarity in movie reviews dataset (83.13% accuracy) and outperformed standard SVM (68.36% accuracy) model. Dai et al., BIBREF20 developed an autoencoder to detect polarity in the text using deep learning classifier. The LSTM was trained on IMDB movie reviews dataset. The experimental results showed the outperformance of their proposed approach over SVM. In table 1 some of the autoencoder approaches are depicted.
Methodology and Experimental Results
The novel dataset used in this work was collected manually and includes Persian movie reviews from 2014 to 2016. A subset of dataset was used to train the neural network (60% training dataset) and rest of the data (40%) was used to test and validate the performance of the trained neural network (testing set (30%), validation set (10%)). There are two types of labels in the dataset: positive or negative. The reviews were manually annotated by three native Persian speakers aged between 30 and 50 years old.
After data collection, the corpus was pre-processed using tokenisation, normalisation and stemming techniques. The process of splitting sentences into single words or tokens is called tokenisation. For example, "The movie is great" is changed to "The", "movie", "is", "great" BIBREF21 . There are some words which contain numbers or repeated letters; for example, "great" may be written as "gr8" and "good" as "gooood". Normalisation is used to convert such words into their normal forms BIBREF22 . The process of reducing words to their roots is called stemming; for example, "going" is changed to "go" BIBREF23 . Finally, words were converted into vectors: fastText was used to convert each word into a 300-dimensional vector. FastText is a library for text classification and representation BIBREF24 BIBREF25 BIBREF9 .
For classification, an MLP, autoencoders and CNNs have been used. Fig. 1 depicts the modelled MLP architecture. The MLP classifier was trained for 100 iterations BIBREF26 . Fig. 2 depicts the modelled autoencoder architecture. An autoencoder is a feed-forward deep neural network trained in an unsupervised manner and is used for dimensionality reduction. It consists of input, hidden and output layers, and is used to compress the input into a latent space from which the output is reconstructed BIBREF27 BIBREF28 BIBREF29 . The exploited autoencoder model, depicted in Fig. 2, consists of one input layer, three hidden layers (1500, 512, 1500) and an output layer. The Convolutional Neural Network contains three types of layers (input, hidden and output layers). The hidden layers consist of convolutional layers, pooling layers, fully connected layers and a normalisation layer. The hidden neuron INLINEFORM0 of unit j, with bias INLINEFORM1 , is computed as a weighted sum over the continuous visible nodes v, given by: DISPLAYFORM0
The modelled CNN architecture is depicted in Fig. 3 BIBREF29 BIBREF28 . For CNN modelling, each utterance was represented as a concatenation of the vectors of its constituent words. The network has a total of 11 layers: 4 convolution layers, 4 max pooling layers and 3 fully connected layers. The convolution layers have filters of size 2 with 15 feature maps. Each convolution layer is followed by a max pooling layer with window size 2. The last max pooling layer is followed by fully connected layers of size 5000, 500 and 4. For the final layer, softmax activation is used.
To evaluate the performance of the proposed approach, precision (1), recall (2), F-measure (3), and prediction accuracy (4) have been used as performance metrics. The experimental results are shown in Table 1, where it can be seen that the autoencoder outperformed the MLP and the CNN outperformed the autoencoder, with the highest achieved accuracy of 82.6%. DISPLAYFORM0 DISPLAYFORM1
where TP denotes true positives, TN true negatives, FP false positives, and FN false negatives.
Conclusion
Sentiment analysis has been used extensively for a wide range of real-world applications, ranging from product reviews and survey feedback to business intelligence and operational improvements. However, the majority of research efforts are devoted to the English language only, even though information of great importance is also available in other languages. In this work, we focus on developing sentiment analysis models for the Persian language, specifically for Persian movie reviews. Two deep learning models (deep autoencoders and deep CNNs) are developed and compared with a state-of-the-art shallow MLP-based machine learning model. Simulation results revealed that our proposed CNN model outperforms the autoencoders and the MLP. In future, we intend to exploit more advanced deep learning models such as Long Short-Term Memory (LSTM) and LSTM-CNNs to further evaluate performance on our novel Persian dataset.
Acknowledgment
Amir Hussain and Ahsan Adeel were supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grant No.EP/M026981/1. | Unanswerable |