id | pid | input | output
stringlengths 40-40 | stringlengths 42-42 | stringlengths 8.37k-169k | stringlengths 1-1.63k
---|---|---|---|
9a4aa0e4096c73cd2c3b1eab437c1bf24ae7bf03 | 9a4aa0e4096c73cd2c3b1eab437c1bf24ae7bf03_0 | Q: What text sequences are associated with each vertex?
Text: Introduction
Networks are ubiquitous, with prominent examples including social networks (e.g., Facebook, Twitter) or citation networks of research papers (e.g., arXiv). When analyzing data from these real-world networks, traditional methods often represent vertices (nodes) as one-hot representations (containing the connectivity information of each vertex with respect to all other vertices), usually suffering from issues related to the inherent sparsity of large-scale networks. This results in models that are not able to fully capture the relationships between vertices of the network BIBREF0 , BIBREF1 . Alternatively, network embedding (i.e., network representation learning) has been considered, representing each vertex of a network with a low-dimensional vector that preserves information on its similarity relative to other vertices. This approach has attracted considerable attention in recent years BIBREF2 , BIBREF0 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 .
Traditional network embedding approaches focus primarily on learning representations of vertices that preserve local structure, as well as internal structural properties of the network. For instance, Isomap BIBREF9 , LINE BIBREF3 , and Grarep BIBREF10 were proposed to preserve first-, second-, and higher-order proximity between nodes, respectively. DeepWalk BIBREF0 , which learns vertex representations from random-walk sequences, similarly, only takes into account structural information of the network. However, in real-world networks, vertices usually contain rich textual information (e.g., user profiles in Facebook, paper abstracts in arXiv, user-generated content on Twitter, etc.), which may be leveraged effectively for learning more informative embeddings.
To address this opportunity, BIBREF11 proposed text-associated DeepWalk, to incorporate textual information into the vectorial representations of vertices (embeddings). BIBREF12 employed deep recurrent neural networks to integrate the information from vertex-associated text into network representations. Further, BIBREF13 proposed to more effectively model the semantic relationships between vertices using a mutual attention mechanism.
Although these methods have demonstrated performance gains over structure-only network embeddings, the relationship between text sequences for a pair of vertices is accounted for solely by comparing their sentence embeddings. However, as shown in Figure 1 , to assess the similarity between two research papers, a more effective strategy would compare and align (via local-weighting) individual important words (keywords) within a pair of abstracts, while information from other words (e.g., stop words) that tend to be less relevant can be effectively ignored (down-weighted). This alignment mechanism is difficult to accomplish in models where text sequences are first embedded into a common space and then compared in pairs BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 .
We propose to learn a semantic-aware Network Embedding (NE) that incorporates word-level alignment features abstracted from text sequences associated with vertex pairs. Given a pair of sentences, our model first aligns each word within one sentence with keywords from the other sentence (adaptively up-weighted via an attention mechanism), producing a set of fine-grained matching vectors. These features are then accumulated via a simple but efficient aggregation function, obtaining the final representation for the sentence. As a result, the word-by-word alignment features (as illustrated in Figure 1 ) are explicitly and effectively captured by our model. Further, the learned network embeddings under our framework are adaptive to the specific (local) vertices that are considered, and thus are context-aware and especially suitable for downstream tasks, such as link prediction. Moreover, since the word-by-word matching procedure introduced here is highly parallelizable and does not require any complex encoding networks, such as Long Short-Term Memory (LSTM) or Convolutional Neural Networks (CNNs), our framework requires significantly less time for training, which is attractive for large-scale network applications.
We evaluate our approach on three real-world datasets spanning distinct network-embedding-based applications: link prediction, vertex classification and visualization. We show that the proposed word-by-word alignment mechanism efficiently incorporates textual information into the network embedding, and consistently exhibits superior performance relative to several competitive baselines. Analyses considering the extracted word-by-word pairs further validate the effectiveness of the proposed framework.
Problem Definition
A network (graph) is defined as $G = \lbrace V,E\rbrace $ , where $V$ and $E$ denote the set of $N$ vertices (nodes) and edges, respectively, where elements of $E$ are two-element subsets of $V$ . Here we only consider undirected networks; however, our approach (introduced below) can be readily extended to the directed case. We also define $W$ , the symmetric $\mathbb {R}^{N \times N}$ matrix whose elements, $w_{ij}$ , denote the weights associated with edges in $E$ , and $T$ , the set of text sequences assigned to each vertex. Edges and weights contain the structural information of the network, while the text can be used to characterize the semantic properties of each vertex. Given network $G$ , with the network embedding we seek to encode each vertex into a low-dimensional vector $\mathbf{h}$ (with dimension much smaller than $N$ ), while preserving structural and semantic features of $G$ .
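To make the setup concrete, the following is a minimal sketch of the objects defined above (the symmetric weight matrix $W$ and the per-vertex text sequences $T$); the toy graph, texts, and variable names are purely illustrative and not taken from the paper's datasets.

```python
import numpy as np

# Toy undirected network with N = 3 vertices; values are illustrative only.
N = 3
W = np.zeros((N, N))                  # symmetric weight matrix W
edges = [(0, 1, 1.0), (1, 2, 2.0)]    # (i, j, w_ij)
for i, j, w in edges:
    W[i, j] = W[j, i] = w             # undirected network => symmetric W

# T: one text sequence (e.g., a paper abstract) per vertex.
T = [
    "bayesian inference with mcmc sampling",
    "convergence analysis of mcmc methods",
    "convolutional networks for image classification",
]
```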
Framework Overview
To incorporate both structural and semantic information into the network embeddings, we specify two types of (latent) embeddings: ( $i$ ) $\mathbf{h}_s$ , the structural embedding; and ( $ii$ ) $\mathbf{h}_t$ , the textual embedding. Specifically, each vertex in $G$ is encoded into a low-dimensional embedding $\mathbf{h} = [\mathbf{h}_s; \mathbf{h}_t]$ . To learn these embeddings, we specify an objective that leverages the information from both $W$ and $T$ , denoted as
$$\mathcal{L} = \sum _{e \in E} \mathcal{L}_{\textrm {struct}}(e) + \mathcal{L}_{\textrm {text}}(e) + \mathcal{L}_{\textrm {joint}}(e) \,,$$ (Eq. 4)
where $\mathcal{L}_{\textrm {struct}}$ , $\mathcal{L}_{\textrm {text}}$ and $\mathcal{L}_{\textrm {joint}}$ denote structure, text, and joint structure-text training losses, respectively. For a vertex pair $\lbrace v_i,v_j\rbrace $ weighted by $w_{ij}$ , $\mathcal{L}_{\textrm {struct}}(v_i, v_j)$ in ( 4 ) is defined as BIBREF3
$$\mathcal{L}_{\textrm {struct}}(v_i, v_j) = w_{ij} \log p(\mathbf{h}^i_s|\mathbf{h}^j_{s}) \,,$$ (Eq. 5)
where $p(\mathbf{h}^i_s|\mathbf{h}^j_{s})$ denotes the conditional probability between structural embeddings for vertices $\lbrace v_i,v_j\rbrace $ . To leverage the textual information in $T$ , similar text-specific and joint structure-text training objectives are also defined
$$\begin{aligned} \mathcal{L}_{\textrm {text}}(v_i, v_j) & = w_{ij} \alpha _1 \log p(\mathbf{h}^i_t|\mathbf{h}^j_{t}) \,, \\ \mathcal{L}_{\textrm {joint}}(v_i, v_j) & = w_{ij} \alpha _2 \log p(\mathbf{h}^i_t|\mathbf{h}^j_{s}) + w_{ij}\alpha _3 \log p(\mathbf{h}^i_s|\mathbf{h}^j_{t}) \,, \end{aligned}$$ (Eq. 6)
where $p(\mathbf{h}^i_t|\mathbf{h}^j_t)$ and $p(\mathbf{h}^i_t|\mathbf{h}^j_s)$ (or $p(\mathbf{h}^i_s|\mathbf{h}^j_t)$ ) denote the conditional probability for a pair of text embeddings, and for a text embedding given a structure embedding (or vice versa), respectively, for vertices $\lbrace v_i,v_j\rbrace $ . Further, $\alpha _1$ , $\alpha _2$ and $\alpha _3$ are hyperparameters that balance the impact of the different training-loss components. Note that the structural embeddings, $\mathbf{h}_s$ , are treated directly as parameters, while the text embeddings $\mathbf{h}_t$ are learned based on the text sequences associated with vertices.
For all conditional probability terms, we follow BIBREF3 and consider the second-order proximity between vertex pairs. Thus, for vertices $\lbrace v_i,v_j\rbrace $ , the probability of generating $\mathbf{h}^i$ conditioned on $\mathbf{h}^j$ may be written as
$$p(\mathbf{h}^i|\mathbf{h}^j) = \frac{\exp \left({\mathbf{h}^j}^T \mathbf{h}^i\right)}{\textstyle {\sum }_{k=1}^{N}\exp \left({\mathbf{h}^j}^T \mathbf{h}^k\right)} \,.$$ (Eq. 7)
Note that ( 7 ) can be applied to both structural and text embeddings in ( 5 ) and ( 6 ).
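As a rough illustration, the conditional probability in ( 7 ) is a softmax over inner products of vertex embeddings; this numpy sketch assumes the embeddings are stored row-wise in a matrix, and all names are illustrative rather than taken from a reference implementation.

```python
import numpy as np

def conditional_prob(H, i, j):
    """p(h^i | h^j): softmax over inner products with all N vertices (Eq. 7).

    H: (N, d) array whose rows are vertex embeddings (structural or textual).
    """
    scores = H @ H[j]                      # <h^j, h^k> for every vertex k
    scores = scores - scores.max()         # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return probs[i]

H = np.random.randn(5, 8)                  # 5 vertices, 8-dimensional embeddings
print(conditional_prob(H, i=0, j=1))
```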
Inspired by BIBREF13 , we further assume that vertices in the network play different roles depending on the vertex with which they interact. Thus, for a given vertex, the text embedding, $\mathbf{h}_t$ , is adaptive (specific) to the vertex it is being conditioned on. This type of context-aware textual embedding has demonstrated superior performance relative to context-free embeddings BIBREF13 . In the following two sections, we describe our strategy for encoding the text sequence associated with an edge into its adaptive textual embedding, via word-by-context and word-by-word alignments.
Word-by-Context Alignment
We first introduce our base model, which re-weights the importance of individual words within a text sequence in the context of the edge being considered. Consider text sequences associated with two vertices connected by an edge, denoted $t_a$ and $t_b$ and contained in $T$ . Text sequences $t_a$ and $t_b$ are of lengths $M_a$ and $M_b$ , respectively, and are represented by word-embedding matrices $\mathbf{X}_a\in \mathbb {R}^{d\times M_a}$ and $\mathbf{X}_b\in \mathbb {R}^{d\times M_b}$ , respectively, where $d$ is the dimension of the word embedding. Further, $\mathbf{x}^{(i)}_a$ denotes the embedding of the $i$ -th word in sequence $t_a$ .
Our goal is to encode text sequences $t_a$ and $t_b$ into counterpart-aware vectorial representations $h_a$ and $h_b$ . Thus, while inferring the adaptive textual embedding for sentence $t_a$ , we propose re-weighting the importance of each word in $t_a$ to explicitly account for its alignment with sentence $t_b$ . The weight $\alpha _i$ , corresponding to the $i$ -th word in $t_a$ , is generated as:
$$\alpha _i = \frac{\exp (\tanh (\mathbf{W}_1 \mathbf{c}_b + \mathbf{W}_2 \mathbf{x}^{(i)}_a))}{\sum _{j = 1}^{M_a} \exp (\tanh (\mathbf{W}_1 \mathbf{c}_b + \mathbf{W}_2 \mathbf{x}^{(j)}_a))} \,,$$ (Eq. 9)
where $\mathbf{W}_1$ and $\mathbf{W}_2$ are model parameters and $\mathbf{c}_b = \sum _{i = 1}^{M_b} \mathbf{x}^{(i)}_b$ is the context vector of sequence $t_b$ , obtained by simply averaging over all the word embeddings in the sequence, similar to fastText BIBREF19 . Further, the word-by-context embedding for sequence $t_a$ is obtained by taking the weighted average over all word embeddings
$$h_a = \textstyle {\sum }_{i = 1}^{M_a} \alpha _i \mathbf{x}^{(i)}_a \,.$$ (Eq. 10)
Intuitively, $\alpha _i$ may be understood as the relevance score between the $i$ th word in $t_a$ and sequence $t_b$ . Specifically, keywords within $t_a$ , in the context of $t_b$ , should be assigned larger weights, while less important words will be correspondingly down-weighted. Similarly, $h_b$ is encoded as a weighted embedding using ( 9 ) and ( 10 ).
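A minimal numpy sketch of the word-by-context alignment in ( 9 )-( 10 ) follows. Treating the parameters $\mathbf{W}_1$ and $\mathbf{W}_2$ as vectors (so that each attention score is a scalar) and computing the context vector as a mean are simplifying assumptions of this sketch; all names are illustrative.

```python
import numpy as np

def word_by_context_embedding(Xa, Xb, w1, w2):
    """Counterpart-aware embedding h_a of sequence t_a (Eqs. 9-10).

    Xa: (d, Ma) word embeddings of t_a;  Xb: (d, Mb) word embeddings of t_b.
    w1, w2: (d,) parameter vectors, so each attention score is a scalar
    (a simplifying assumption of this sketch).
    """
    cb = Xb.mean(axis=1)                        # context vector of t_b (averaged)
    scores = np.tanh(w1 @ cb + w2 @ Xa)         # one score per word of t_a
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()                 # Eq. 9
    return Xa @ alpha                           # Eq. 10: weighted average

d, Ma, Mb = 8, 6, 5
Xa, Xb = np.random.randn(d, Ma), np.random.randn(d, Mb)
w1, w2 = np.random.randn(d), np.random.randn(d)
h_a = word_by_context_embedding(Xa, Xb, w1, w2)  # shape (d,)
```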
Fine-Grained Word-by-Word Alignment
With the alignment in the previous section, word-by-context matching features $\alpha _i$ are modeled; however, the word-by-word alignment information (fine-grained), which is key to characterize the relationship between two vertices (as discussed in the above), is not explicitly captured. So motivated, we further propose an architecture to explicitly abstract word-by-word alignment information from $t_a$ and $t_b$ , to learn the relationship between the two vertices. This is inspired by the recent success of Relation Networks (RNs) for relational reasoning BIBREF20 .
As illustrated in Figure 2 , given two input embedding matrices $\mathbf{X}_a$ and $\mathbf{X}_b$ , we first compute the affinity matrix $\mathbf{A} \in \mathbb {R}^{M_b\times M_a}$ , whose elements represent the affinity scores corresponding to all word pairs between sequences $t_a$ and $t_b$
$$\mathbf{A} = \mathbf{X}^T_b \mathbf{X}_a \,.$$ (Eq. 13)
Subsequently, we compute the context-aware matrix for sequence $t_b$ as
$$\mathbf{P}_b = \textrm {softmax}(\mathbf{A}) \,, \qquad \widetilde{\mathbf{X}}_b = \mathbf{X}_b \mathbf{P}_b \,,$$ (Eq. 14)
where the $\textrm {softmax}(\cdot )$ function is applied column-wise to $\mathbf{A}$ , and thus each column of $\mathbf{P}_b$ contains the attention weights (importance scores) over the words of sequence $t_b$ , with one column for each word in sequence $t_a$ . Thus, matrix $\widetilde{\mathbf{X}}_b \in \mathbb {R}^{d\times M_a}$ in ( 14 ) constitutes an attention-weighted embedding for $\mathbf{X}_b$ . Specifically, the $i$ -th column of $\widetilde{\mathbf{X}}_b$ , denoted as $\widetilde{\mathbf{x}}^{(i)}_b$ , can be understood as a weighted average over all the words in $t_b$ , where higher attention weights indicate better alignment (match) with the $i$ -th word in $t_a$ .
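A small numpy sketch of ( 13 )-( 14 ): the affinity matrix, the column-wise softmax, and the attention-weighted embedding $\widetilde{\mathbf{X}}_b$. Shapes follow the definitions above; the function and variable names are illustrative.

```python
import numpy as np

def attention_weighted_embedding(Xa, Xb):
    """Affinity matrix A and attention-weighted embedding X~_b (Eqs. 13-14).

    Xa: (d, Ma) and Xb: (d, Mb) are the word-embedding matrices of t_a, t_b.
    """
    A = Xb.T @ Xa                                            # (Mb, Ma) affinities
    A = A - A.max(axis=0, keepdims=True)                     # numerical stability
    P_b = np.exp(A) / np.exp(A).sum(axis=0, keepdims=True)   # column-wise softmax
    Xb_tilde = Xb @ P_b                                      # (d, Ma): one column per word of t_a
    return P_b, Xb_tilde

d, Ma, Mb = 8, 6, 5
Xa, Xb = np.random.randn(d, Ma), np.random.randn(d, Mb)
P_b, Xb_tilde = attention_weighted_embedding(Xa, Xb)
```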
To abstract the word-by-word alignments, we compare $\mathbf{x}^{(i)}_a$ with $\widetilde{\mathbf{x}}^{(i)}_b$ , for $i=1,2,...,M_a$ , to obtain the corresponding matching vector
$$\mathbf{m}^{(i)}_a=f_{\textrm {align}}\left(\mathbf{x}^{(i)}_a,\widetilde{\mathbf{x}}^{(i)}_b\right) \,,$$ (Eq. 15)
where $f_{\textrm {align}}(\cdot )$ represents the alignment function. Inspired by the observation in BIBREF16 that simple comparison/alignment functions based on element-wise operations exhibit excellent performance in matching text sequences, here we use a combination of element-wise subtraction and multiplication, $ f_{\textrm {align}}(\mathbf{x}^{(i)}_a,\widetilde{\mathbf{x}}^{(i)}_b) = [\mathbf{x}^{(i)}_a - \widetilde{\mathbf{x}}^{(i)}_b; \mathbf{x}^{(i)}_a \odot \widetilde{\mathbf{x}}^{(i)}_b] \,, $
where $\odot $ denotes the element-wise (Hadamard) product, and the results of the two operations are concatenated to produce the matching vector $\mathbf{m}^{(i)}_a$ . Note that these operators may be used individually or combined, as we investigate in our experiments.
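As a minimal sketch (with illustrative names), the alignment function reduces to the two element-wise operations followed by concatenation:

```python
import numpy as np

def f_align(xa_i, xb_tilde_i):
    """Matching vector for one word position (Eq. 15): [subtraction; product]."""
    return np.concatenate([xa_i - xb_tilde_i, xa_i * xb_tilde_i])   # shape (2d,)

d = 8
m_i = f_align(np.random.randn(d), np.random.randn(d))   # one matching vector
```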
Subsequently, the matching vectors from ( 15 ) are aggregated to produce the final textual embedding $\mathbf{h}_t^a$ for sequence $t_a$ as
$$\mathbf{h}_t^a=f_{\textrm {aggregate}}\left(\mathbf{m}^{(1)}_a,\mathbf{m}^{(2)}_a,...,\mathbf{m}^{(M_a)}_a\right) \,,$$ (Eq. 16)
where $f_{\textrm {aggregate}}$ denotes the aggregation function, which we specify as the max-pooling operation. Notably, other commutative operators, such as summation or average pooling, can be employed instead. Although these aggregation functions are simple and invariant to the order of words in input sentences, they have been demonstrated to be highly effective in relational reasoning BIBREF15 , BIBREF20 . To further explore this, in Section "Ablation Study" , we conduct an ablation study comparing different choices of alignment and aggregation functions.
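A sketch of the aggregation step in ( 16 ), with max-pooling as the default and the alternative commutative operators mentioned above; the names and toy dimensions are illustrative.

```python
import numpy as np

def aggregate_matching_vectors(M, mode="max"):
    """Aggregate Ma matching vectors into the textual embedding h_t^a (Eq. 16).

    M: (2d, Ma) array whose columns are the matching vectors m_a^(i).
    """
    if mode == "max":       # max-pooling, the choice used in the paper
        return M.max(axis=1)
    if mode == "mean":      # alternative commutative aggregations
        return M.mean(axis=1)
    return M.sum(axis=1)

M = np.random.randn(16, 6)              # e.g., 2d = 16 features, Ma = 6 words
h_t_a = aggregate_matching_vectors(M)   # shape (2d,)
```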
The representation $\mathbf{h}_t^b$ for sequence $t_b$ can be obtained in a similar manner through ( 13 ), ( 14 ), ( 15 ) and ( 16 ), but replacing ( 13 ) with its transpose, $\mathbf{A}^{\prime } = \mathbf{X}^T_a \mathbf{X}_b$ . Note that this word-by-word alignment is more computationally involved than the word-by-context alignment; however, it has substantially fewer parameters to learn, since we no longer have to estimate the parameters in ( 9 ).
Training and Inference
For large-scale networks, computing and optimizing the conditional probabilities in ( 4 ) using ( 7 ) is computationally prohibitive, since it requires a summation over all vertices $V$ in $G$ . To address this limitation, we leverage the negative sampling strategy introduced by BIBREF21 , i.e., we perform computations by sampling a subset of negative edges. As a result, the conditional in ( 7 ) can be approximated as: $ \begin{aligned} \log p(\mathbf{h}^i|\mathbf{h}^j) & \approx \log \sigma \left({\mathbf{h}^j}^T \mathbf{h}^i\right) \\ & + \sum _{k=1}^{K} \mathbb {E}_{\mathbf{h}^k\sim P(v)}\left[\log \sigma (-{\mathbf{h}^j}^T \mathbf{h}^k)\right] \,, \end{aligned} $
where $\sigma (x) = 1/(1+\exp (-x))$ is the sigmoid function. Following BIBREF21 , we set the noise distribution $P(v) \propto d_v^{3/4}$ , where $d_v$ is the out-degree of vertex $v\in V$ . The number of negative samples $K$ is treated as a hyperparameter. We use Adam BIBREF22 to update the model parameters while minimizing the objective in ( 4 ).
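For concreteness, here is a numpy sketch of the negative-sampling estimate above and of the noise distribution $P(v) \propto d_v^{3/4}$; in the full model these terms would be combined according to ( 4 ) and optimized with Adam. Function names and the degree input are illustrative assumptions of this sketch.

```python
import numpy as np

def neg_sampling_logprob(h_i, h_j, H_neg):
    """Negative-sampling estimate of log p(h^i | h^j).

    h_i, h_j: (d,) embeddings of the observed vertex pair.
    H_neg: (K, d) embeddings of K vertices drawn from the noise distribution.
    """
    log_sigmoid = lambda x: -np.logaddexp(0.0, -x)        # log(sigmoid(x)), stable
    return log_sigmoid(h_j @ h_i) + log_sigmoid(-(H_neg @ h_j)).sum()

def sample_negatives(degrees, K, rng=np.random.default_rng(0)):
    """Draw K vertex indices from P(v) proportional to degree^(3/4)."""
    p = degrees.astype(float) ** 0.75
    return rng.choice(len(degrees), size=K, p=p / p.sum())

degrees = np.array([3, 1, 4, 2, 5])
neg_ids = sample_negatives(degrees, K=2)
```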
Related Work
Network embedding methods can be divided into two categories: (i) methods that rely solely on the network structure, e.g., vertex connectivity; and (ii) methods that leverage both the structure of the network and the information associated with its vertices.
For the first type of models, DeepWalk BIBREF0 has been proposed to learn node representations by generating node contexts via truncated random walks; it is similar to the concept of Skip-Gram BIBREF21 , originally introduced for learning word embeddings. LINE BIBREF3 proposed a principled objective to explicitly capture first-order and second-order proximity information from the vertices of a network. Further, BIBREF4 introduced a biased random walk procedure to generate the neighborhood for a vertex, which infers the node representations by maximizing the likelihood of preserving the local context information of vertices. However, these algorithms generally ignore rich heterogeneous information associated with vertices. Here, we focus on incorporating textual information into network embeddings.
To learn semantic-aware network embeddings, Text-Associated DeepWalk (TADW) BIBREF11 proposed to integrate textual features into network representations with matrix factorization, by leveraging the equivalence between DeepWalk and matrix factorization. CENE (Content-Enhanced Network Embedding) BIBREF12 used bidirectional recurrent neural networks to abstract the semantic information associated with vertices, which further demonstrated the advantages of employing textual information. To capture the interaction between sentences of vertex pairs, BIBREF13 further proposed Context-Aware Network Embedding (CANE), that employs a mutual attention mechanism to adaptively account for the textual information from neighboring vertices. Despite showing improvement over structure-only models, these semantic-aware methods cannot capture word-level alignment information, which is important for inferring the relationship between node pairs, as previously discussed. In this work, we introduce a Word-Alignment-based Network Embedding (WANE) framework, which aligns and aggregates word-by-word matching features in an explicit manner, to obtain more informative network representations.
Experimental Results
We experiment with three variants of our WANE model: (i) WANE: where the word embeddings of each text sequence are simply averaged to obtain the sentence representations, similar to BIBREF19 , BIBREF25 . (ii) WANE-wc: where the textual embeddings are inferred with word-by-context alignment. (iii) WANE-ww: where the word-by-word alignment mechanism is leveraged to capture word-by-word matching features between available sequence pairs.
Link Prediction
Table 1 presents link prediction results for all models on the Cora dataset, where different ratios of edges are used for training. It can be observed that when only a small number of edges are available, e.g., $15\%$ , the performance of structure-only methods is much worse than that of semantic-aware models that take textual information into consideration. The performance gap tends to be smaller when a larger proportion of edges is employed for training. This highlights the importance of incorporating associated text sequences into network embeddings, especially when representing a relatively sparse network. More importantly, the proposed WANE-ww model consistently outperforms other semantic-aware NE models by a substantial margin, indicating that our model better abstracts word-by-word alignment features from the text sequences available, and thus yields more informative network representations.
Further, WANE-ww also outperforms WANE and WANE-wc over a wide range of edge training proportions. This suggests that: (i) adaptively assigning different weights to each word within a text sequence (according to its paired sequence) tends to be a better strategy than treating each word equally (as in WANE); and (ii) solely considering word-by-context alignment features (as in WANE-wc) is not as effective as abstracting word-by-word matching information from text sequences. We observe the same trend and the superiority of our WANE-ww model on the other two datasets, HepTh and Zhihu, as shown in Tables 2 and 3 , respectively.
Multi-label Vertex Classification
We further evaluate the effectiveness of the proposed framework on vertex classification tasks with the Cora dataset. Similar to BIBREF13 , we generate the global embedding for each vertex by taking the average over its context-aware embeddings with all other connected vertices. As shown in Figure 3 (c), semantic-aware NE methods (including naive combination, TADW, CENE, CANE) exhibit higher test accuracies than semantic-agnostic models, demonstrating the advantages of incorporating textual information. Moreover, WANE-ww consistently outperforms other competitive semantic-aware models over a wide range of labeled proportions, suggesting that explicitly capturing word-by-word alignment features is not only useful for vertex-pair-based tasks, such as link prediction, but also results in better global embeddings, which are required for vertex classification tasks. These observations further demonstrate that WANE-ww is an effective and robust framework for extracting informative network representations.
We further consider the case where the training ratio is less than $10\%$ , and evaluate the learned network embedding with a semi-supervised classifier. Following BIBREF11 , we employ a Transductive SVM (TSVM) classifier with a linear kernel BIBREF26 for fairness. As illustrated in Table 4 , the proposed WANE-ww model exhibits superior performance in most cases. This may be due to the fact that WANE-ww extracts information from the vertices and text sequences jointly, so the obtained vertex embeddings are less noisy and perform more consistently with relatively small training ratios BIBREF11 .
Ablation Study
Motivated by the observation in BIBREF16 that the advantages of different functions to match two vectors vary from task to task, we further explore the choice of alignment and aggregation functions in our WANE-ww model. To match the word pairs between two sequences, we experimented with three types of operations: subtraction, multiplication, and Sub & Multi (the concatenation of both approaches). As shown in Figure 3 (a) and 3 (b), element-wise subtraction tends to be the most effective operation performance-wise on both Cora and Zhihu datasets, and performs comparably to Sub & Multi on the HepTh dataset. This finding is consistent with the results in BIBREF16 , where they found that simple comparison functions based on element-wise operations work very well on matching text sequences.
In terms of the aggregation functions, we compare (one-layer) CNN, mean-pooling, and max-pooling operations to accumulate the matching vectors. As shown in Figure 3 (b), max-pooling has the best empirical results on all three datasets. This may be attributed to the fact that the max-pooling operation is better at selecting important word-by-word alignment features, among all matching vectors available, to infer the relationship between vertices.
Qualitative Analysis
To visualize the learned network representations, we further employ $t$ -SNE to map the low-dimensional vectors of the vertices to a 2-D embedding space. We use the Cora dataset, because labels are associated with each vertex, and WANE-ww to obtain the network embeddings.
As shown in Figure 4 , where each point indicates one paper (vertex) and its color indicates the category it belongs to, the embeddings with the same label are indeed very close in the 2-D plot, while those with different labels are relatively farther from each other. Note that the model is not trained with any label information, indicating that WANE-ww has extracted meaningful patterns from the text and vertex information available.
The proposed word-by-word alignment mechanism can be used to highlight the most informative words (and the corresponding matching features) with respect to the relationship between vertices. We visualize the norms of the matching vectors obtained in ( 15 ) in Figure 5 for the Cora dataset. It can be observed that matched keywords, e.g., `MCMC', `convergence', between the text sequences are indeed assigned higher values in the matching vectors. These words would be selected preferentially by the final max-pooling aggregation operation. This indicates that WANE-ww is able to abstract important word-by-word alignment features from paired text sequences.
Conclusions
We have presented a novel framework to incorporate the semantic information from vertex-associated text sequences into network embeddings. An align-aggregate framework is introduced, which first aligns a sentence pair by capturing the word-by-word matching features, and then adaptively aggregates this word-level alignment information with an efficient max-pooling function. The abstracted semantic features are further encoded, along with the structural information, into a shared space to obtain the final network embedding. Compelling experimental results on several tasks demonstrate the advantages of our approach. In future work, we aim to leverage abundant unlabeled text data to abstract more informative sentence representations BIBREF27 , BIBREF28 , BIBREF29 , BIBREF30 . Another interesting direction is to learn binary and compact network embeddings, which could be more efficient in terms of both computation and memory, relative to their continuous counterparts BIBREF31 . | abstracts, sentences |
1d1ab5d8a24dfd15d95a5a7506ac0456d1192209 | 1d1ab5d8a24dfd15d95a5a7506ac0456d1192209_0 | Q: How long does it take for the model to run?
| Unanswerable |
09a993756d2781a89f7ec5d7992f812d60e24232 | 09a993756d2781a89f7ec5d7992f812d60e24232_0 | Q: Do they report results only on English data?
Text: Introduction
Improving unsupervised learning is of key importance for advancing machine learning methods, as to unlock access to almost unlimited amounts of data to be used as training resources. The majority of recent success stories of deep learning does not fall into this category but instead relied on supervised training (in particular in the vision domain). A very notable exception comes from the text and natural language processing domain, in the form of semantic word embeddings trained unsupervised BIBREF0 , BIBREF1 , BIBREF2 . Within only a few years from their invention, such word representations – which are based on a simple matrix factorization model as we formalize below – are now routinely trained on very large amounts of raw text data, and have become ubiquitous building blocks of a majority of current state-of-the-art NLP applications.
While very useful semantic representations are available for words, it remains challenging to produce and learn such semantic embeddings for longer pieces of text, such as sentences, paragraphs or entire documents. Even more so, it remains a key goal to learn such general-purpose representations in an unsupervised way.
Currently, two contrary research trends have emerged in text representation learning: On one hand, a strong trend in deep-learning for NLP leads towards increasingly powerful and complex models, such as recurrent neural networks (RNNs), LSTMs, attention models and even Neural Turing Machine architectures. While extremely strong in expressiveness, the increased model complexity makes such models much slower to train on larger datasets. On the other end of the spectrum, simpler “shallow” models such as matrix factorizations (or bilinear models) can benefit from training on much larger sets of data, which can be a key advantage, especially in the unsupervised setting.
Surprisingly, for constructing sentence embeddings, naively using averaged word vectors was shown to outperform LSTMs (see BIBREF3 for plain averaging, and BIBREF4 for weighted averaging). This example shows potential in exploiting the trade-off between model complexity and ability to process huge amounts of text using scalable algorithms, towards the simpler side. In view of this trade-off, our work here further advances unsupervised learning of sentence embeddings. Our proposed model can be seen as an extension of the C-BOW BIBREF0 , BIBREF1 training objective to train sentence instead of word embeddings. We demonstrate that the empirical performance of our resulting general-purpose sentence embeddings very significantly exceeds the state of the art, while keeping the model simplicity as well as training and inference complexity exactly as low as in averaging methods BIBREF3 , BIBREF4 , thereby also putting the work by BIBREF4 in perspective.
Contributions. The main contributions in this work can be summarized as follows:
Model
Our model is inspired by simple matrix factor models (bilinear models), such as those recently used very successfully in unsupervised learning of word embeddings BIBREF0 , BIBREF1 , BIBREF2 , BIBREF5 , as well as in supervised learning of sentence classification BIBREF6 . More precisely, these models can all be formalized as an optimization problem of the form DISPLAYFORM0
for two parameter matrices INLINEFORM0 and INLINEFORM1 , where INLINEFORM2 denotes the vocabulary. Here, the columns of the matrix INLINEFORM3 represent the learnt source word vectors whereas those of INLINEFORM4 represent the target word vectors. For a given sentence INLINEFORM5 , which can be of arbitrary length, the indicator vector INLINEFORM6 is a binary vector encoding INLINEFORM7 (bag of words encoding).
Fixed-length context windows INLINEFORM0 running over the corpus are used in word embedding methods as in C-BOW BIBREF0 , BIBREF1 and GloVe BIBREF2 . Here we have INLINEFORM1 and each cost function INLINEFORM2 only depends on a single row of its input, describing the observed target word for the given fixed-length context INLINEFORM3 . In contrast, for sentence embeddings which are the focus of our paper here, INLINEFORM4 will be entire sentences or documents (therefore variable length). This property is shared with the supervised FastText classifier BIBREF6 , which however uses soft-max with INLINEFORM5 being the number of class labels.
Proposed Unsupervised Model
We propose a new unsupervised model, Sent2Vec, for learning universal sentence embeddings. Conceptually, the model can be interpreted as a natural extension of the word-contexts from C-BOW BIBREF0 , BIBREF1 to a larger sentence context, with the sentence words being specifically optimized towards additive combination over the sentence, by means of the unsupervised objective function.
Formally, we learn a source (or context) embedding INLINEFORM0 and target embedding INLINEFORM1 for each word INLINEFORM2 in the vocabulary, with embedding dimension INLINEFORM3 and INLINEFORM4 as in ( EQREF6 ). The sentence embedding is defined as the average of the source word embeddings of its constituent words, as in ( EQREF8 ). We augment this model furthermore by also learning source embeddings for not only unigrams but also n-grams present in each sentence, and averaging the n-gram embeddings along with the words, i.e., the sentence embedding INLINEFORM5 for INLINEFORM6 is modeled as DISPLAYFORM0
where INLINEFORM0 is the list of n-grams (including unigrams) present in sentence INLINEFORM1 . In order to predict a missing word from the context, our objective models the softmax output approximated by negative sampling following BIBREF0 . For the large number of output classes INLINEFORM2 to be predicted, negative sampling is known to significantly improve training efficiency, see also BIBREF7 . Given the binary logistic loss function INLINEFORM3 coupled with negative sampling, our unsupervised training objective is formulated as follows: INLINEFORM4
where INLINEFORM0 corresponds to the current sentence and INLINEFORM1 is the set of words sampled negatively for the word INLINEFORM2 . The negatives are sampled following a multinomial distribution where each word INLINEFORM5 is associated with a probability INLINEFORM6 , where INLINEFORM7 is the normalized frequency of INLINEFORM8 in the corpus.
To select the possible target unigrams (positives), we use subsampling as in BIBREF6 , BIBREF5 , each word INLINEFORM0 being discarded with probability INLINEFORM1 where INLINEFORM2 , and INLINEFORM3 is the subsampling hyper-parameter. Subsampling prevents very frequent words from having too much influence in the learning, as they would introduce strong biases in the prediction task. With positives subsampling and respecting the negative sampling distribution, the precise training objective function becomes DISPLAYFORM0
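As a rough sketch of the pieces just described (assuming the elided formulas follow the standard negative-sampling form), the sentence embedding is an average of source n-gram vectors, and each subsampled target word contributes a binary logistic loss against sampled negatives. All names are illustrative; this is not the reference C++ implementation.

```python
import numpy as np

def sentence_embedding(V_src, ngram_ids):
    """Sent2Vec-style sentence embedding: average of the source embeddings of
    all unigrams and n-grams present in the sentence."""
    return V_src[ngram_ids].mean(axis=0)

def target_word_loss(v_sent, u_target, U_neg):
    """Binary logistic loss with negative sampling for one subsampled target
    word (in the model, the target itself is left out of the context)."""
    log_sigmoid = lambda x: -np.logaddexp(0.0, -x)       # log(sigmoid(x)), stable
    return -(log_sigmoid(u_target @ v_sent) + log_sigmoid(-(U_neg @ v_sent)).sum())

vocab, dim = 100, 16
V_src, U_tgt = np.random.randn(vocab, dim), np.random.randn(vocab, dim)
ids = [3, 17, 42]                                        # ids of one sentence's n-grams
v_sent = sentence_embedding(V_src, ids)
loss = target_word_loss(v_sent, U_tgt[17], U_tgt[[5, 9]])   # 2 sampled negatives
```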
Computational Efficiency
In contrast to more complex neural network based models, one of the core advantages of the proposed technique is the low computational cost for both inference and training. Given a sentence INLINEFORM0 and a trained model, computing the sentence representation INLINEFORM1 only requires INLINEFORM2 floating point operations (or INLINEFORM3 to be precise for the n-gram case, see ( EQREF8 )), where INLINEFORM4 is the embedding dimension. The same holds for the cost of training with SGD on the objective ( EQREF10 ), per sentence seen in the training corpus. Due to the simplicity of the model, parallel training is straight-forward using parallelized or distributed SGD.
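As a minimal illustration of the inference cost, assuming the trained source embeddings are stored row-wise in a matrix, computing a sentence representation is a single lookup-and-average:

import numpy as np

def sentence_embedding(ngram_ids, source_embeddings):
    # Sum |R(S)| rows of dimension h and divide by the count:
    # O(|R(S)| * h) floating point operations, no encoder network involved.
    return source_embeddings[ngram_ids].mean(axis=0)

rng = np.random.default_rng(0)
V = rng.normal(size=(10, 4))            # toy: 10 n-gram rows, embedding dimension h = 4
print(sentence_embedding([1, 3, 7], V))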
Also, in order to store higher-order n-grams efficiently, we use the standard hashing trick, see e.g. BIBREF8 , with the same hashing function as used in FastText BIBREF6 , BIBREF5 .
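A sketch of the hashing trick for n-grams; the real implementation uses FastText's hashing function, so the generic CRC32 hash and the bucket count below are stand-ins:

import zlib

def ngram_row(ngram_tokens, num_unigrams=100_000, num_buckets=2_000_000):
    # Higher-order n-grams are not added to the vocabulary; they are hashed
    # into a fixed number of buckets whose embedding rows are stored after
    # the unigram rows.
    key = " ".join(ngram_tokens).encode("utf-8")
    return num_unigrams + (zlib.crc32(key) % num_buckets)

print(ngram_row(("not", "good")))
print(ngram_row(("very", "bad")))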
Comparison to C-BOW
C-BOW BIBREF0 , BIBREF1 aims to predict a chosen target word given its fixed-size context window, the context being defined by the average of the vectors associated with the words at a distance less than the window size hyper-parameter INLINEFORM0 . While our system, when restricted to unigram features, can be seen as an extension of C-BOW where the context window includes the entire sentence, in practice there are a few important differences, as C-BOW uses important tricks to facilitate the learning of word embeddings. C-BOW first uses frequent word subsampling on the sentences, deciding to discard each token INLINEFORM1 with probability INLINEFORM2 or similar (small variations exist across implementations). Subsampling prevents the generation of n-gram features, and deprives the sentence of an important part of its syntactic features. It also shortens the distance between subsampled words, implicitly increasing the span of the context window. A second trick consists of using dynamic context windows: for each subsampled word INLINEFORM3 , the size of its associated context window is sampled uniformly between 1 and INLINEFORM4 . Using dynamic context windows is equivalent to weighting by the distance from the focus word INLINEFORM5 divided by the window size BIBREF9 . This makes the prediction task local, and goes against our objective of creating sentence embeddings, as we want to learn how to compose all n-gram features present in a sentence. In the results section, we report a significant improvement of our method over C-BOW.
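The claimed equivalence between dynamic context windows and distance-based down-weighting can be checked with a short simulation (a sketch; the closed form in the last column follows from the uniform sampling described above):

import random

def inclusion_probability(distance, ws, trials=100_000, seed=0):
    # With a dynamic window, the effective size b is drawn uniformly from
    # {1, ..., ws}; a context word at distance d is used iff d <= b.
    rng = random.Random(seed)
    return sum(distance <= rng.randint(1, ws) for _ in range(trials)) / trials

ws = 5
for d in range(1, ws + 1):
    print(d, round(inclusion_probability(d, ws), 3), (ws - d + 1) / ws)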
Model Training
Three different datasets have been used to train our models: the Toronto book corpus, Wikipedia sentences and tweets. The Wikipedia and Toronto books sentences have been tokenized using the Stanford NLP library BIBREF10 , while for tweets we used the NLTK tweets tokenizer BIBREF11 . For training, we select a sentence randomly from the dataset and then proceed to select all the possible target unigrams using subsampling. We update the weights using SGD with a linearly decaying learning rate.
Also, to prevent overfitting, for each sentence we use dropout on its list of n-grams INLINEFORM0 , where INLINEFORM1 is the set of all unigrams contained in sentence INLINEFORM2 . After empirically trying multiple dropout schemes, we find that dropping INLINEFORM3 n-grams ( INLINEFORM4 ) for each sentence gives superior results compared to dropping each token with some fixed probability. This dropout mechanism would negatively impact shorter sentences. The regularization can be pushed further by applying L1 regularization to the word vectors. Encouraging sparsity in the embedding vectors is particularly beneficial for high embedding dimension INLINEFORM5 . The additional soft thresholding in every SGD step adds negligible computational cost. See also Appendix SECREF8 . We train two models on each dataset, one with unigrams only and one with unigrams and bigrams. All training parameters for the models are provided in Table TABREF25 in the supplementary material. Our C++ implementation builds upon the FastText library BIBREF6 , BIBREF5 . We will make our code and pre-trained models available open-source.
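The n-gram dropout described above can be sketched as follows; keeping all unigrams and dropping exactly K higher-order n-grams per sentence is our reading of the placeholder notation, so treat that detail (and the helper names) as assumptions:

import random

def extract_bigrams(tokens):
    # Higher-order n-grams; restricted to bigrams here, matching the
    # "unigrams + bigrams" models.
    return [" ".join(tokens[i:i + 2]) for i in range(len(tokens) - 1)]

def sentence_features(tokens, K, rng):
    # Unigrams are always kept; exactly K of the higher-order n-grams are
    # dropped per sentence, rather than dropping each token with a fixed
    # probability.
    ngrams = extract_bigrams(tokens)
    if 0 < K < len(ngrams):
        dropped = set(rng.sample(range(len(ngrams)), K))
        ngrams = [g for i, g in enumerate(ngrams) if i not in dropped]
    return list(tokens) + ngrams

rng = random.Random(0)
print(sentence_features("the cat sat on the mat".split(), K=2, rng=rng))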
Related Work
We discuss existing models which have been proposed to construct sentence embeddings. While there is a large body of work in this direction – with several approaches using e.g. labelled datasets of paraphrase pairs to obtain sentence embeddings in a supervised manner BIBREF12 , BIBREF3 , BIBREF13 – we here focus on unsupervised, task-independent models. While some methods require ordered raw text, i.e., a coherent corpus where the next sentence is a logical continuation of the previous sentence, others rely only on raw text, i.e., an unordered collection of sentences. Finally, we also discuss alternative models built from structured data sources.
Unsupervised Models Independent of Sentence Ordering
The ParagraphVector DBOW model BIBREF14 is a log-linear model which is trained to learn sentence as well as word embeddings, and then uses a softmax distribution to predict the words contained in the sentence given the sentence vector representation. The authors also propose a different model, ParagraphVector DM, where n-grams of consecutive words are used along with the sentence vector representation to predict the next word.
BIBREF15 also presented an early approach to obtain compositional embeddings from word vectors. They use different compositional techniques including static averaging or Fisher vectors of a multivariate Gaussian to obtain sentence embeddings from word2vec models.
BIBREF16 propose a Sequential (Denoising) Autoencoder, S(D)AE. This model first introduces noise in the input data: Firstly each word is deleted with probability INLINEFORM0 , then for each non-overlapping bigram, words are swapped with probability INLINEFORM1 . The model then uses an LSTM-based architecture to retrieve the original sentence from the corrupted version. The model can then be used to encode new sentences into vector representations. In the case of INLINEFORM2 , the model simply becomes a Sequential Autoencoder. BIBREF16 also propose a variant (S(D)AE + embs.) in which the words are represented by fixed pre-trained word vector embeddings.
BIBREF4 propose a model in which sentences are represented as a weighted average of fixed (pre-trained) word vectors, followed by a post-processing step of subtracting the first principal component. Using the generative model of BIBREF17 , words are generated conditioned on a sentence “discourse” vector INLINEFORM0 : INLINEFORM1
where INLINEFORM0 and INLINEFORM1 and INLINEFORM2 , INLINEFORM3 are scalars. INLINEFORM4 is the common discourse vector, representing a shared component among all discourses, mainly related to syntax. It allows the model to better generate syntactical features. The INLINEFORM5 term is here to enable the model to generate some frequent words even if their matching with the discourse vector INLINEFORM6 is low.
Therefore, this model tries to generate sentences as a mixture of three types of words: words matching the sentence discourse vector INLINEFORM0 , syntactical words matching INLINEFORM1 , and words with high INLINEFORM2 . BIBREF4 demonstrated that for this model, the MLE of INLINEFORM3 can be approximated by INLINEFORM4 , where INLINEFORM5 is a scalar. The sentence discourse vector can hence be obtained by subtracting INLINEFORM6 , estimated by the first principal component of the INLINEFORM7 's on a set of sentences. In other words, the sentence embeddings are obtained by a weighted average of the word vectors, stripping away the syntax by subtracting the common discourse vector and down-weighting frequent tokens. They generate sentence embeddings from diverse pre-trained word embeddings, among which are unsupervised word embeddings such as GloVe BIBREF2 as well as supervised word embeddings such as paragram-SL999 (PSL) BIBREF18 trained on the Paraphrase Database BIBREF19 .
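For concreteness, the weighted-average-plus-projection-removal procedure of BIBREF4 can be sketched as below. The weight a / (a + p(w)), the default value of a, and the use of an uncentered SVD to estimate the common component are taken from the usual description of that method rather than from the placeholders here, so they should be read as assumptions.

import numpy as np

def sif_embeddings(sentences, word_vec, p, a=1e-3):
    # Weighted average of pre-trained word vectors, down-weighting frequent
    # words, followed by removal of the projection onto the first singular
    # vector (the "common discourse" direction).
    X = np.stack([
        np.mean([a / (a + p[w]) * word_vec[w] for w in s], axis=0)
        for s in sentences
    ])
    u = np.linalg.svd(X, full_matrices=False)[2][0]
    return X - np.outer(X @ u, u)

rng = np.random.default_rng(0)
word_vec = {w: rng.normal(size=5) for w in ["the", "cat", "sat", "dog", "ran"]}
p = {"the": 0.3, "cat": 0.01, "sat": 0.01, "dog": 0.01, "ran": 0.01}
print(sif_embeddings([["the", "cat", "sat"], ["the", "dog", "ran"]], word_vec, p))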
In a very different line of work, C-PHRASE BIBREF20 relies on additional information from the syntactic parse tree of each sentence, which is incorporated into the C-BOW training objective.
BIBREF21 show that single layer CNNs can be modeled using a tensor decomposition approach. While building on an unsupervised objective, the employed dictionary learning step for obtaining phrase templates is task-specific (for each use-case), not resulting in general-purpose embeddings.
Unsupervised Models Depending on Sentence Ordering
The SkipThought model BIBREF22 combines sentence level models with recurrent neural networks. Given a sentence INLINEFORM0 from an ordered corpus, the model is trained to predict INLINEFORM1 and INLINEFORM2 .
FastSent BIBREF16 is a sentence-level log-linear bag-of-words model. Like SkipThought, it uses adjacent sentences as the prediction target and is trained in an unsupervised fashion. Using word sequences allows the model to improve over the earlier work of paragraph2vec BIBREF14 . BIBREF16 augment FastSent further by training it to predict the constituent words of the sentence as well. This model is named FastSent + AE in our comparisons.
Compared to our approach, Siamese C-BOW BIBREF23 shares the idea of learning to average word embeddings over a sentence. However, it relies on a Siamese neural network architecture to predict surrounding sentences, contrasting our simpler unsupervised objective.
Note that on the character sequence level instead of word sequences, FastText BIBREF5 uses the same conceptual model to obtain better word embeddings. This is most similar to our proposed model, with two key differences: Firstly, we predict from source word sequences to target words, as opposed to character sequences to target words, and secondly, our model is averaging the source embeddings instead of summing them.
Models requiring structured data
DictRep BIBREF24 is trained to map dictionary definitions of the words to the pre-trained word embeddings of these words. They use two different architectures, namely BOW and RNN (LSTM) with the choice of learning the input word embeddings or using them pre-trained. A similar architecture is used by the CaptionRep variant, but here the task is the mapping of given image captions to a pre-trained vector representation of these images.
Evaluation Tasks
We use a standard set of supervised as well as unsupervised benchmark tasks from the literature to evaluate our trained models, following BIBREF16 . The breadth of tasks allows us to fairly measure generalization to a wide range of different domains, testing the general-purpose quality (universality) of all competing sentence embeddings. For downstream supervised evaluations, sentence embeddings are combined with logistic regression to predict target labels. In the unsupervised evaluation of sentence similarity, the cosine similarity between two embeddings is compared to human judgements using correlation scores.
Downstream Supervised Evaluation. Sentence embeddings are evaluated for various supervised classification tasks as follows. We evaluate paraphrase identification (MSRP) BIBREF25 , classification of movie review sentiment (MR) BIBREF26 , product reviews (CR) BIBREF27 , subjectivity classification (SUBJ) BIBREF28 , opinion polarity (MPQA) BIBREF29 and question type classification (TREC) BIBREF30 . To classify, we use the code provided by BIBREF22 in the same manner as in BIBREF16 . For the MSRP dataset, containing pairs of sentences INLINEFORM0 with an associated paraphrase label, we generate feature vectors by concatenating their Sent2Vec representations INLINEFORM1 with the component-wise product INLINEFORM2 . The predefined training split is used to tune the L2 penalty parameter using cross-validation, and the accuracy and F1 scores are computed on the test set. For the remaining 5 datasets, Sent2Vec embeddings are inferred from input sentences and directly fed to a logistic regression classifier. Accuracy scores are obtained using 10-fold cross-validation for the MR, CR, SUBJ and MPQA datasets. For those datasets, nested cross-validation is used to tune the L2 penalty. For the TREC dataset, as for the MSRP dataset, the L2 penalty is tuned on the predefined train split using 10-fold cross-validation, and the accuracy is computed on the test set.
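A sketch of the MSRP feature construction and classifier described above, using scikit-learn's logistic regression as a stand-in for the evaluation code of BIBREF22; the random vectors replace real Sent2Vec embeddings, and the exact pair combination is our reading of the placeholder:

import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(h1, h2):
    # Concatenate the two sentence representations with their component-wise
    # product; whether the raw pair or e.g. their absolute difference is used
    # is hidden behind the placeholder, so this choice is an assumption.
    return np.concatenate([h1, h2, h1 * h2])

rng = np.random.default_rng(0)
X = np.stack([pair_features(rng.normal(size=8), rng.normal(size=8)) for _ in range(40)])
y = rng.integers(0, 2, size=40)                              # toy paraphrase labels
clf = LogisticRegression(C=1.0, max_iter=1000).fit(X, y)     # C tuned by cross-validation in practice
print(clf.score(X, y))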
Unsupervised Similarity Evaluation. We perform unsupervised evaluation of the learnt sentence embeddings using the sentence cosine similarity, on the STS 2014 BIBREF31 and SICK 2014 BIBREF32 datasets. These similarity scores are compared to the gold-standard human judgements using Pearson's INLINEFORM0 BIBREF33 and Spearman's INLINEFORM1 BIBREF34 correlation scores. The SICK dataset consists of about 10,000 sentence pairs along with relatedness scores of the pairs. The STS 2014 dataset contains 3,770 pairs, divided into six different categories on the basis of the origin of sentences/phrases, namely Twitter, headlines, news, forum, WordNet and images.
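The unsupervised evaluation protocol amounts to the following sketch (random vectors and scores stand in for real embeddings and gold annotations):

import numpy as np
from scipy.stats import pearsonr, spearmanr

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
pairs = [(rng.normal(size=8), rng.normal(size=8)) for _ in range(50)]
gold = rng.uniform(0, 5, size=50)                     # toy gold relatedness scores
pred = np.array([cosine(u, v) for u, v in pairs])
print(pearsonr(pred, gold)[0], spearmanr(pred, gold)[0])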
Results and Discussion
In Tables TABREF18 and TABREF19 , we compare our results with those obtained by BIBREF16 on different models. The last column of Table TABREF21 shows the dramatic improvement in training time of our models (and other C-BOW-inspired models) in contrast to neural network based models. All our Sent2Vec models are trained on a machine with 2x Intel Xeon E5 INLINEFORM0 2680v3, 12 cores @2.5GHz.
Along with the models discussed in Section SECREF3 , this also includes the sentence embedding baselines obtained by simple averaging of word embeddings over the sentence, in both the C-BOW and skip-gram variants. TF-IDF BOW is a representation consisting of the counts of the 200,000 most common feature-words, weighted by their TF-IDF scores. To ensure coherence, we only include unsupervised models in the main paper. Performance of supervised and semi-supervised models on these evaluations can be observed in Tables TABREF29 and TABREF30 in the supplementary material.
Downstream Supervised Evaluation Results. On running supervised evaluations and observing the results in Table TABREF18 , we find that on average our models are second only to SkipThought vectors. Also, both our models achieve state-of-the-art results on the CR task. We also observe that on half of the supervised tasks, our unigrams + bigrams model is the best model after SkipThought. Our models are weaker on the MSRP task (which consists of the identification of labelled paraphrases) compared to state-of-the-art methods. However, we observe that the models which perform very strongly on this task end up faring very poorly on the other tasks, indicating a lack of generalizability. On the rest of the tasks, our models perform extremely well. The SkipThought model is able to outperform our models on most of the tasks, as it is trained to predict the previous and next sentences, and many tasks are able to make use of this contextual information, which is missing in our Sent2Vec models. For example, the TREC task is a poor measure of how one predicts the content of the sentence (the question) but a good measure of how the next sentence in the sequence (the answer) is predicted.
Unsupervised Similarity Evaluation Results. In Table TABREF19 , we see that our Sent2Vec models are state-of-the-art on the majority of tasks when comparing to all the unsupervised models trained on the Toronto corpus, and clearly achieve the best averaged performance. Our Sent2Vec models also on average outperform or are on par with the C-PHRASE model, despite significantly lagging behind on the STS 2014 WordNet and News subtasks. This observation can be attributed to the fact that a big chunk of the data that the C-PHRASE model is trained on comes from English Wikipedia, helping it to perform well on datasets involving definitions and news items. Also, C-PHRASE uses data three times the size of the Toronto book corpus. Interestingly, our model outperforms C-PHRASE when trained on Wikipedia, as shown in Table TABREF21 , despite the fact that we use no parse tree information. Official STS 2017 benchmark. In the official results of the most recent edition of the STS 2017 benchmark BIBREF35 , our model also significantly outperforms C-PHRASE, and in fact delivers the best unsupervised baseline method.
For the Siamese C-BOW model trained on the Toronto corpus, supervised evaluation as well as similarity evaluation results on the SICK 2014 dataset are unavailable.
Macro Average. To summarize our contributions on both supervised and unsupervised tasks, in Table TABREF21 we present the results in terms of the macro average over the averages of both supervised and unsupervised tasks along with the training times of the models. For unsupervised tasks, averages are taken over both Spearman and Pearson scores. The comparison includes the best performing unsupervised and semi-supervised methods described in Section SECREF3 . For models trained on the Toronto books dataset, we report a 3.8 INLINEFORM0 points improvement over the state of the art. Considering all supervised, semi-supervised methods and all datasets compared in BIBREF16 , we report a 2.2 INLINEFORM1 points improvement.
We also see a noticeable improvement in accuracy as we use larger datasets like Twitter and Wikipedia. We furthermore see that the Sent2Vec models are faster to train when compared to methods like SkipThought and DictRep, owing to the SGD optimizer allowing a high degree of parallelizability.
We can clearly see Sent2Vec outperforming other unsupervised and even semi-supervised methods. This can be attributed to the superior generalizability of our model across supervised and unsupervised tasks.
Comparison with BIBREF4 . We also compare our work with BIBREF4 , who also use additive compositionality to obtain sentence embeddings. However, in contrast to our model, they use fixed, pre-trained word embeddings to build a weighted average of these embeddings using unigram probabilities. While we could not find pre-trained state-of-the-art word embeddings trained on the Toronto books corpus, we evaluated their method using GloVe embeddings obtained from the larger Common Crawl Corpus, which is 42 times larger than our Twitter corpus, greatly favoring their method over ours.
In Table TABREF22 , we report an experimental comparison to their model on unsupervised tasks. In the table, the suffix W indicates that their down-weighting scheme has been used, while the suffix R indicates the removal of the first principal component. They report values of INLINEFORM0 as giving the best results and used INLINEFORM1 for all their experiments. We observe that our results are competitive with the embeddings of BIBREF4 for purely unsupervised methods. It is important to note that the scores obtained from supervised task-specific PSL embeddings trained for the purpose of semantic similarity outperform our method on both SICK and average STS 2014, which is expected as our model is trained purely unsupervised.
In order to facilitate a more detailed comparison, we also evaluated the unsupervised GloVe + WR embeddings on downstream supervised tasks and compared them to our Twitter models. To use BIBREF4 's method in a supervised setup, we precomputed and stored the common discourse vector INLINEFORM0 using 2 million random Wikipedia sentences. On average, our models outperform their unsupervised models by a significant margin, despite the fact that they used GloVe embeddings trained on larger corpora than ours (42 times larger). Our models also outperform their semi-supervised PSL + WR model. This indicates that our model learns a more precise weighting scheme than the static one proposed by BIBREF4 .
The effect of datasets and n-grams. Despite being trained on three very different datasets, all of our models generalize well to sometimes very specific domains. Models trained on the Toronto corpus are state-of-the-art on the STS 2014 images dataset, even beating the supervised CaptionRep model trained on images. We also see that the addition of bigrams to our models doesn't help much when it comes to unsupervised evaluations but gives a significant boost in accuracy on supervised tasks. We attribute this phenomenon to the ability of bigram models to capture some non-compositional features missed by unigram models. Having a single representation for “not good" or “very bad" can boost the supervised model's ability to infer relevant features for the corresponding classifier. For semantic similarity tasks however, the relative uniqueness of bigrams results in pushing sentence representations further apart, which can explain the average drop of scores for bigram models on those tasks.
On learning the importance and the direction of the word vectors. Our model – by learning how to generate and compose word vectors – has to learn both the direction of the word embeddings as well as their norm. Considering the norms of the word vectors as used by our averaging over the sentence, we observe an interesting distribution of the “importance” of each word. In Figure FIGREF24 we show the profile of the INLINEFORM0 -norm as a function of INLINEFORM1 for each INLINEFORM2 , and compare it to the static down-weighting mechanism of BIBREF4 . We can observe that our model is learning to down-weight frequent tokens by itself. It is also down-weighting rare tokens, and the INLINEFORM3 profile seems to roughly follow Luhn's hypothesis BIBREF36 , a well-known information retrieval paradigm, stating that mid-rank terms are the most significant for discriminating content.
Conclusion
In this paper, we introduce a novel, computationally efficient, unsupervised, C-BOW-inspired method to train and infer sentence embeddings. On supervised evaluations, our method, on an average, achieves better performance than all other unsupervised competitors with the exception of SkipThought. However, SkipThought vectors show a very poor performance on sentence similarity tasks while our model is state-of-the-art for these evaluations on average. Also, our model is generalizable, extremely fast to train, simple to understand and easily interpretable, showing the relevance of simple and well-grounded representation models in contrast to the models using deep architectures. Future work could focus on augmenting the model to exploit data with ordered sentences. Furthermore, we would like to investigate the model's ability to use pre-trained embeddings for downstream transfer learning tasks.
L1 regularization of models
Optionally, our model can be additionally improved by adding an L1 regularizer term in the objective function, leading to slightly better generalization performance. Additionally, encouraging sparsity in the embedding vectors is beneficial for memory reasons, allowing higher embedding dimensions INLINEFORM0 .
We propose to apply L1 regularization individually to each word (and n-gram) vector (both source and target vectors). Formally, the training objective function ( EQREF10 ) then becomes DISPLAYFORM0
where INLINEFORM0 is the regularization parameter.
Now, in order to minimize a function of the form INLINEFORM0 where INLINEFORM1 is not differentiable over the domain, we can use the basic proximal-gradient scheme. In this iterative method, after doing a gradient descent step on INLINEFORM2 with learning rate INLINEFORM3 , we update INLINEFORM4 as DISPLAYFORM0
where INLINEFORM0 is called the proximal function BIBREF37 of INLINEFORM1 with INLINEFORM2 being the proximal parameter and INLINEFORM3 is the value of INLINEFORM4 after a gradient (or SGD) step on INLINEFORM5 .
In our case, INLINEFORM0 and the corresponding proximal operator is given by DISPLAYFORM0
where INLINEFORM0 corresponds to element-wise product.
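For an L1 penalty, the proximal operator referred to above is plausibly the familiar soft-thresholding operator (a hedged reconstruction; λ is the regularization parameter and γ the proximal parameter):

\mathrm{prox}_{\lambda \gamma}(\tilde{x}) \;=\; \mathrm{sign}(\tilde{x}) \odot \max\big(|\tilde{x}| - \lambda \gamma \mathbf{1},\, 0\big),

applied element-wise, with \odot the element-wise product mentioned in the text.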
Similar to the proximal-gradient scheme, in our case we can optionally use the thresholding operator on the updated word and n-gram vectors after an SGD step. The soft thresholding parameter used for this update is INLINEFORM0 and INLINEFORM1 for the source and target vectors respectively where INLINEFORM2 is the current learning rate, INLINEFORM3 is the INLINEFORM4 regularization parameter and INLINEFORM5 is the sentence on which SGD is being run.
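In code, the extra per-step work is a single element-wise pass over the vectors touched by the SGD step (a sketch; the scaled thresholds described above are passed in as a single number here):

import numpy as np

def soft_threshold(x, thresh):
    # Proximal step for the L1 penalty: shrink every coordinate towards zero
    # by `thresh` and clip at zero, which encourages exactly sparse embeddings.
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

v = np.array([0.30, -0.02, 0.01, -0.40])
print(soft_threshold(v, 0.05))        # small coordinates become exactly zero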
We observe that INLINEFORM0 regularization using the proximal step gives our models a small boost in performance. Also, applying the thresholding operator takes only INLINEFORM1 floating point operations for updating the word vectors corresponding to the sentence, and INLINEFORM2 for updating the target as well as the negative word vectors, where INLINEFORM3 is the number of negatives sampled and INLINEFORM4 is the embedding dimension. Thus, performing INLINEFORM5 regularization using the soft-thresholding operator comes with only a small computational overhead.
We set INLINEFORM0 to be 0.0005 for both the Wikipedia and the Toronto Book Corpus unigrams + bigrams models. | Yes |
37eba8c3cfe23778498d95a7dfddf8dfb725f8e2 | 37eba8c3cfe23778498d95a7dfddf8dfb725f8e2_0 | Q: Which other unsupervised models are used for comparison?
Text: Introduction
Improving unsupervised learning is of key importance for advancing machine learning methods, as it unlocks access to almost unlimited amounts of data that can be used as training resources. The majority of recent success stories of deep learning does not fall into this category but instead relies on supervised training (in particular in the vision domain). A very notable exception comes from the text and natural language processing domain, in the form of semantic word embeddings trained unsupervised BIBREF0 , BIBREF1 , BIBREF2 . Within only a few years from their invention, such word representations – which are based on a simple matrix factorization model as we formalize below – are now routinely trained on very large amounts of raw text data, and have become ubiquitous building blocks of a majority of current state-of-the-art NLP applications.
While very useful semantic representations are available for words, it remains challenging to produce and learn such semantic embeddings for longer pieces of text, such as sentences, paragraphs or entire documents. Even more so, it remains a key goal to learn such general-purpose representations in an unsupervised way.
Currently, two contrary research trends have emerged in text representation learning: On one hand, a strong trend in deep-learning for NLP leads towards increasingly powerful and complex models, such as recurrent neural networks (RNNs), LSTMs, attention models and even Neural Turing Machine architectures. While extremely strong in expressiveness, the increased model complexity makes such models much slower to train on larger datasets. On the other end of the spectrum, simpler “shallow” models such as matrix factorizations (or bilinear models) can benefit from training on much larger sets of data, which can be a key advantage, especially in the unsupervised setting.
Surprisingly, for constructing sentence embeddings, naively using averaged word vectors was shown to outperform LSTMs (see BIBREF3 for plain averaging, and BIBREF4 for weighted averaging). This example shows potential in exploiting the trade-off between model complexity and ability to process huge amounts of text using scalable algorithms, towards the simpler side. In view of this trade-off, our work here further advances unsupervised learning of sentence embeddings. Our proposed model can be seen as an extension of the C-BOW BIBREF0 , BIBREF1 training objective to train sentence instead of word embeddings. We demonstrate that the empirical performance of our resulting general-purpose sentence embeddings very significantly exceeds the state of the art, while keeping the model simplicity as well as training and inference complexity exactly as low as in averaging methods BIBREF3 , BIBREF4 , thereby also putting the work by BIBREF4 in perspective.
| Sequential (Denoising) Autoencoder, TF-IDF BOW, SkipThought, FastSent, Siamese C-BOW, C-BOW, C-PHRASE, ParagraphVector
cdf1bf4b202576c39e063921f6b63dc9e4d6b1ff | cdf1bf4b202576c39e063921f6b63dc9e4d6b1ff_0 | Q: What metric is used to measure performance?
Text: Introduction
Improving unsupervised learning is of key importance for advancing machine learning methods, as to unlock access to almost unlimited amounts of data to be used as training resources. The majority of recent success stories of deep learning does not fall into this category but instead relied on supervised training (in particular in the vision domain). A very notable exception comes from the text and natural language processing domain, in the form of semantic word embeddings trained unsupervised BIBREF0 , BIBREF1 , BIBREF2 . Within only a few years from their invention, such word representations – which are based on a simple matrix factorization model as we formalize below – are now routinely trained on very large amounts of raw text data, and have become ubiquitous building blocks of a majority of current state-of-the-art NLP applications.
While very useful semantic representations are available for words, it remains challenging to produce and learn such semantic embeddings for longer pieces of text, such as sentences, paragraphs or entire documents. Even more so, it remains a key goal to learn such general-purpose representations in an unsupervised way.
Currently, two contrary research trends have emerged in text representation learning: On one hand, a strong trend in deep-learning for NLP leads towards increasingly powerful and complex models, such as recurrent neural networks (RNNs), LSTMs, attention models and even Neural Turing Machine architectures. While extremely strong in expressiveness, the increased model complexity makes such models much slower to train on larger datasets. On the other end of the spectrum, simpler “shallow” models such as matrix factorizations (or bilinear models) can benefit from training on much larger sets of data, which can be a key advantage, especially in the unsupervised setting.
Surprisingly, for constructing sentence embeddings, naively using averaged word vectors was shown to outperform LSTMs (see BIBREF3 for plain averaging, and BIBREF4 for weighted averaging). This example shows potential in exploiting the trade-off between model complexity and ability to process huge amounts of text using scalable algorithms, towards the simpler side. In view of this trade-off, our work here further advances unsupervised learning of sentence embeddings. Our proposed model can be seen as an extension of the C-BOW BIBREF0 , BIBREF1 training objective to train sentence instead of word embeddings. We demonstrate that the empirical performance of our resulting general-purpose sentence embeddings very significantly exceeds the state of the art, while keeping the model simplicity as well as training and inference complexity exactly as low as in averaging methods BIBREF3 , BIBREF4 , thereby also putting the work by BIBREF4 in perspective.
Contributions. The main contributions in this work can be summarized as follows:
Model
Our model is inspired by simple matrix factorization models (bilinear models) such as those recently used very successfully in unsupervised learning of word embeddings BIBREF0 , BIBREF1 , BIBREF2 , BIBREF5 as well as in supervised sentence classification BIBREF6 . More precisely, these models can all be formalized as an optimization problem of the form DISPLAYFORM0
for two parameter matrices INLINEFORM0 and INLINEFORM1 , where INLINEFORM2 denotes the vocabulary. Here, the columns of the matrix INLINEFORM3 represent the learnt source word vectors whereas those of INLINEFORM4 represent the target word vectors. For a given sentence INLINEFORM5 , which can be of arbitrary length, the indicator vector INLINEFORM6 is a binary vector encoding INLINEFORM7 (bag of words encoding).
Fixed-length context windows INLINEFORM0 running over the corpus are used in word embedding methods as in C-BOW BIBREF0 , BIBREF1 and GloVe BIBREF2 . Here we have INLINEFORM1 and each cost function INLINEFORM2 only depends on a single row of its input, describing the observed target word for the given fixed-length context INLINEFORM3 . In contrast, for sentence embeddings which are the focus of our paper here, INLINEFORM4 will be entire sentences or documents (therefore variable length). This property is shared with the supervised FastText classifier BIBREF6 , which however uses soft-max with INLINEFORM5 being the number of class labels.
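As a concrete illustration of this family of bilinear models, the following minimal numpy sketch builds a toy vocabulary, a source matrix whose columns are source vectors, a target matrix whose rows are target vectors, and scores every candidate target word for a bag-of-words context by multiplying the two matrices with the binary indicator vector. All names, sizes and values here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
vocab = ["the", "cat", "sat", "on", "mat", "dog"]
word2id = {w: i for i, w in enumerate(vocab)}
h = 8                                              # embedding dimension

V = rng.normal(scale=0.1, size=(h, len(vocab)))    # source vectors, one column per word
U = rng.normal(scale=0.1, size=(len(vocab), h))    # target vectors, one row per word

def indicator(tokens):
    """Binary bag-of-words indicator vector of a tokenized sentence."""
    iota = np.zeros(len(vocab))
    for w in tokens:
        iota[word2id[w]] = 1.0
    return iota

context = ["the", "cat", "sat"]
scores = U @ (V @ indicator(context))              # one score per candidate target word
print({w: round(float(s), 3) for w, s in zip(vocab, scores)})
```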
Proposed Unsupervised Model
We propose a new unsupervised model, Sent2Vec, for learning universal sentence embeddings. Conceptually, the model can be interpreted as a natural extension of the word-contexts from C-BOW BIBREF0 , BIBREF1 to a larger sentence context, with the sentence words being specifically optimized towards additive combination over the sentence, by means of the unsupervised objective function.
Formally, we learn a source (or context) embedding INLINEFORM0 and target embedding INLINEFORM1 for each word INLINEFORM2 in the vocabulary, with embedding dimension INLINEFORM3 and INLINEFORM4 as in ( EQREF6 ). The sentence embedding is defined as the average of the source word embeddings of its constituent words, as in ( EQREF8 ). We augment this model furthermore by also learning source embeddings for not only unigrams but also n-grams present in each sentence, and averaging the n-gram embeddings along with the words, i.e., the sentence embedding INLINEFORM5 for INLINEFORM6 is modeled as DISPLAYFORM0
where INLINEFORM0 is the list of n-grams (including unigrams) present in sentence INLINEFORM1 . In order to predict a missing word from the context, our objective models the softmax output approximated by negative sampling following BIBREF0 . For the large number of output classes INLINEFORM2 to be predicted, negative sampling is known to significantly improve training efficiency, see also BIBREF7 . Given the binary logistic loss function INLINEFORM3 coupled with negative sampling, our unsupervised training objective is formulated as follows: INLINEFORM4
where INLINEFORM0 corresponds to the current sentence and INLINEFORM1 is the set of words sampled negatively for the word INLINEFORM2 . The negatives are sampled following a multinomial distribution where each word INLINEFORM5 is associated with a probability INLINEFORM6 , where INLINEFORM7 is the normalized frequency of INLINEFORM8 in the corpus.
To select the possible target unigrams (positives), we use subsampling as in BIBREF6 , BIBREF5 , each word INLINEFORM0 being discarded with probability INLINEFORM1 where INLINEFORM2 , and where INLINEFORM3 is the subsampling hyper-parameter. Subsampling prevents very frequent words from having too much influence on the learning, as they would introduce strong biases in the prediction task. With subsampling of positives and respecting the negative sampling distribution, the precise training objective function becomes DISPLAYFORM0
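The following is a minimal numpy sketch of the two ingredients just described: a sentence embedding computed as the average of the source vectors of all unigrams and bigrams in the sentence, and the binary logistic loss with negative sampling for a single subsampled target word. The lazily initialized vector tables, the toy sentence and the hand-picked negatives are assumptions for illustration; they do not reproduce the exact sampling distributions.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(2)
h = 16  # embedding dimension

# Lazily-initialized source (context) and target vector tables.
source = defaultdict(lambda: rng.normal(scale=0.1, size=h))
target = defaultdict(lambda: rng.normal(scale=0.1, size=h))

def ngram_list(tokens, n=2):
    """Unigrams plus bigrams present in the sentence."""
    grams = list(tokens)
    grams += [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return grams

def sentence_embedding(tokens):
    """Average of the source vectors of all unigrams and bigrams in the sentence."""
    return np.mean([source[g] for g in ngram_list(tokens)], axis=0)

def logistic_loss(x):
    """Binary logistic loss l(x) = log(1 + exp(-x))."""
    return np.log1p(np.exp(-x))

def neg_sampling_loss(tokens, target_word, negatives):
    """Loss for one subsampled target word with sampled negatives."""
    context = [t for t in tokens if t != target_word]   # predict the missing word
    v_s = sentence_embedding(context)
    loss = logistic_loss(np.dot(target[target_word], v_s))
    loss += sum(logistic_loss(-np.dot(target[n], v_s)) for n in negatives)
    return loss

sentence = "the cat sat on the mat".split()
print(neg_sampling_loss(sentence, target_word="cat", negatives=["dog", "tree"]))
```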
Computational Efficiency
In contrast to more complex neural network based models, one of the core advantages of the proposed technique is the low computational cost for both inference and training. Given a sentence INLINEFORM0 and a trained model, computing the sentence representation INLINEFORM1 only requires INLINEFORM2 floating point operations (or INLINEFORM3 to be precise for the n-gram case, see ( EQREF8 )), where INLINEFORM4 is the embedding dimension. The same holds for the cost of training with SGD on the objective ( EQREF10 ), per sentence seen in the training corpus. Due to the simplicity of the model, parallel training is straight-forward using parallelized or distributed SGD.
Also, in order to store higher-order n-grams efficiently, we use the standard hashing trick, see e.g. BIBREF8 , with the same hashing function as used in FastText BIBREF6 , BIBREF5 .
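To illustrate the hashing trick for n-grams, here is a small sketch that maps bigrams to rows of a fixed-size embedding matrix using a 32-bit FNV-1a hash modulo a bucket count; the bucket count and this particular hash variant are assumptions for illustration rather than the exact function reused from FastText.

```python
def fnv1a_32(s: str) -> int:
    """32-bit FNV-1a hash of a UTF-8 string."""
    h = 2166136261
    for byte in s.encode("utf-8"):
        h ^= byte
        h = (h * 16777619) & 0xFFFFFFFF
    return h

N_BUCKETS = 2_000_000   # assumed bucket count shared by all higher-order n-grams

def ngram_bucket(ngram: str) -> int:
    """Map an n-gram to a row of the shared embedding matrix."""
    return fnv1a_32(ngram) % N_BUCKETS

for g in ["not good", "very bad", "machine learning"]:
    print(g, "->", ngram_bucket(g))
```

Collisions simply make two n-grams share an embedding slot, which is the usual memory-versus-precision trade-off of the hashing trick.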
Comparison to C-BOW
C-BOW BIBREF0 , BIBREF1 aims to predict a chosen target word given its fixed-size context window, the context being defined by the average of the vectors associated with the words at a distance less than the window size hyper-parameter INLINEFORM0 . If our system, when restricted to unigram features, can be seen as an extension of C-BOW where the context window includes the entire sentence, in practice there are a few important differences, as C-BOW uses important tricks to facilitate the learning of word embeddings. C-BOW first uses frequent word subsampling on the sentences, deciding to discard each token INLINEFORM1 with probability INLINEFORM2 or similar (small variations exist across implementations). Subsampling prevents the generation of n-gram features, and deprives the sentence of an important part of its syntactical features. It also shortens the distance between subsampled words, implicitly increasing the span of the context window. A second trick consists of using dynamic context windows: for each subsampled word INLINEFORM3 , the size of its associated context window is sampled uniformly between 1 and INLINEFORM4 . Using dynamic context windows is equivalent to weighting by the distance from the focus word INLINEFORM5 divided by the window size BIBREF9 . This makes the prediction task local, and goes against our objective of creating sentence embeddings, as we want to learn how to compose all n-gram features present in a sentence. In the results section, we report a significant improvement of our method over C-BOW.
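The small sketch below contrasts the two context definitions: C-BOW draws a dynamic window size uniformly between 1 and the maximum window for each focus word, whereas our sentence-level objective always uses the whole remaining sentence as the context. Function names and the toy sentence are illustrative assumptions.

```python
import random
random.seed(0)

def cbow_context(tokens, i, max_window=5):
    """C-BOW: dynamic window of size drawn uniformly in [1, max_window] around token i."""
    b = random.randint(1, max_window)
    return tokens[max(0, i - b):i] + tokens[i + 1:i + 1 + b]

def sentence_context(tokens, i):
    """Sentence-level objective: the whole remaining sentence is the context."""
    return tokens[:i] + tokens[i + 1:]

tokens = "simple models scale to very large unlabeled corpora".split()
print(cbow_context(tokens, 3))
print(sentence_context(tokens, 3))
```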
Model Training
Three different datasets have been used to train our models: the Toronto book corpus, Wikipedia sentences and tweets. The Wikipedia and Toronto books sentences have been tokenized using the Stanford NLP library BIBREF10 , while for tweets we used the NLTK tweets tokenizer BIBREF11 . For training, we select a sentence randomly from the dataset and then proceed to select all the possible target unigrams using subsampling. We update the weights using SGD with a linearly decaying learning rate.
Also, to prevent overfitting, for each sentence we use dropout on its list of n-grams INLINEFORM0 , where INLINEFORM1 is the set of all unigrams contained in sentence INLINEFORM2 . After empirically trying multiple dropout schemes, we find that dropping INLINEFORM3 n-grams ( INLINEFORM4 ) for each sentence gives superior results compared to dropping each token with some fixed probability. This dropout mechanism would, however, negatively impact shorter sentences. The regularization can be pushed further by applying L1 regularization to the word vectors. Encouraging sparsity in the embedding vectors is particularly beneficial for high dimension INLINEFORM5 . The additional soft thresholding in every SGD step adds negligible computational cost. See also Appendix SECREF8 . We train two models on each dataset, one with unigrams only and one with unigrams and bigrams. All training parameters for the models are provided in Table TABREF25 in the supplementary material. Our C++ implementation builds upon the FastText library BIBREF6 , BIBREF5 . We will make our code and pre-trained models available open-source.
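Below is a hedged sketch of this per-sentence training procedure: sample a sentence, select target unigrams by subsampling, drop a fixed number of n-grams, and decay the learning rate linearly; the actual SGD update on the source and target vectors is left as a placeholder. The subsampling formula, the drop count and all constants are assumptions in the spirit of the description, not the exact implementation.

```python
import random
random.seed(1)

def keep_as_target(word, freq, t=1e-5):
    """Subsampling: keep a unigram as a prediction target with a frequency-dependent
    probability (word2vec-style rule; the exact formula here is an assumption)."""
    discard_prob = max(0.0, 1.0 - (t / freq) ** 0.5)
    return random.random() > discard_prob

def train(corpus, freqs, epochs=1, lr0=0.2, dropped_ngrams=2):
    total_steps = max(1, epochs * len(corpus))
    step = 0
    for _ in range(epochs):
        for sentence in random.sample(corpus, len(corpus)):
            lr = lr0 * (1.0 - step / total_steps)               # linearly decaying LR
            targets = [w for w in sentence if keep_as_target(w, freqs[w])]
            bigrams = [" ".join(sentence[i:i + 2]) for i in range(len(sentence) - 1)]
            kept = random.sample(bigrams, max(0, len(bigrams) - dropped_ngrams))
            for w in targets:
                pass   # sgd_update(sentence, kept, target=w, lr=lr) would go here
            step += 1

corpus = ["the cat sat on the mat".split(), "dogs chase cats in the garden".split()]
freqs = {w: 0.01 for s in corpus for w in s}
train(corpus, freqs)
print("done")
```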
Related Work
We discuss existing models which have been proposed to construct sentence embeddings. While there is a large body of work in this direction – several of these methods using e.g. labelled datasets of paraphrase pairs to obtain sentence embeddings in a supervised manner BIBREF12 , BIBREF3 , BIBREF13 – we here focus on unsupervised, task-independent models. While some methods require ordered raw text, i.e., a coherent corpus where the next sentence is a logical continuation of the previous sentence, others rely only on raw text, i.e., an unordered collection of sentences. Finally, we also discuss alternative models built from structured data sources.
Unsupervised Models Independent of Sentence Ordering
The ParagraphVector DBOW model BIBREF14 is a log-linear model which is trained to learn sentence as well as word embeddings and then uses a softmax distribution to predict words contained in the sentence given the sentence vector representation. They also propose a different model, ParagraphVector DM, where they use n-grams of consecutive words along with the sentence vector representation to predict the next word.
BIBREF15 also presented an early approach to obtain compositional embeddings from word vectors. They use different compositional techniques including static averaging or Fisher vectors of a multivariate Gaussian to obtain sentence embeddings from word2vec models.
BIBREF16 propose a Sequential (Denoising) Autoencoder, S(D)AE. This model first introduces noise in the input data: Firstly each word is deleted with probability INLINEFORM0 , then for each non-overlapping bigram, words are swapped with probability INLINEFORM1 . The model then uses an LSTM-based architecture to retrieve the original sentence from the corrupted version. The model can then be used to encode new sentences into vector representations. In the case of INLINEFORM2 , the model simply becomes a Sequential Autoencoder. BIBREF16 also propose a variant (S(D)AE + embs.) in which the words are represented by fixed pre-trained word vector embeddings.
BIBREF4 propose a model in which sentences are represented as a weighted average of fixed (pre-trained) word vectors, followed by a post-processing step of subtracting the first principal component. Using the generative model of BIBREF17 , words are generated conditioned on a sentence “discourse” vector INLINEFORM0 : INLINEFORM1
where INLINEFORM0 and INLINEFORM1 and INLINEFORM2 , INLINEFORM3 are scalars. INLINEFORM4 is the common discourse vector, representing a shared component among all discourses, mainly related to syntax. It allows the model to better generate syntactical features. The INLINEFORM5 term is here to enable the model to generate some frequent words even if their matching with the discourse vector INLINEFORM6 is low.
Therefore, this model tries to generate sentences as a mixture of three types of words: words matching the sentence discourse vector INLINEFORM0 , syntactical words matching INLINEFORM1 , and words with high INLINEFORM2 . BIBREF4 demonstrated that for this model, the MLE of INLINEFORM3 can be approximated by INLINEFORM4 , where INLINEFORM5 is a scalar. The sentence discourse vector can hence be obtained by subtracting INLINEFORM6 , estimated by the first principal component of the INLINEFORM7 's on a set of sentences. In other words, the sentence embeddings are obtained by a weighted average of the word vectors, stripping away the syntax by subtracting the common discourse vector and down-weighting frequent tokens. They generate sentence embeddings from diverse pre-trained word embeddings, among which are unsupervised word embeddings such as GloVe BIBREF2 as well as supervised word embeddings such as paragram-SL999 (PSL) BIBREF18 trained on the Paraphrase Database BIBREF19 .
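For reference in the later comparisons, here is a minimal numpy sketch of this weighted-averaging baseline as described: each word vector is weighted by a/(a + p(w)), the weighted vectors are averaged per sentence, and the projection onto the first singular vector of the resulting embedding matrix (the common discourse component) is subtracted. The toy vectors, the word probabilities and the value of a are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
h = 50
vocab = {"the": 0.06, "a": 0.05, "cat": 0.001, "dog": 0.001,
         "sat": 0.0008, "ran": 0.0007, "mat": 0.0005}   # toy unigram probabilities p(w)
word_vec = {w: rng.normal(size=h) for w in vocab}       # stand-in for pre-trained vectors

def sif_embed(sentences, a=1e-3):
    # 1) weighted average of the (pre-trained) word vectors of each sentence
    X = np.stack([
        np.mean([a / (a + vocab[w]) * word_vec[w] for w in s], axis=0)
        for s in sentences
    ])
    # 2) subtract the projection onto the first singular vector (common discourse vector)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    u = vt[0]
    return X - np.outer(X @ u, u)

emb = sif_embed([["the", "cat", "sat"], ["a", "dog", "ran"], ["the", "dog", "sat"]])
print(emb.shape)
```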
In a very different line of work, C-PHRASE BIBREF20 relies on additional information from the syntactic parse tree of each sentence, which is incorporated into the C-BOW training objective.
BIBREF21 show that single layer CNNs can be modeled using a tensor decomposition approach. While building on an unsupervised objective, the employed dictionary learning step for obtaining phrase templates is task-specific (for each use-case), not resulting in general-purpose embeddings.
Unsupervised Models Depending on Sentence Ordering
The SkipThought model BIBREF22 combines sentence level models with recurrent neural networks. Given a sentence INLINEFORM0 from an ordered corpus, the model is trained to predict INLINEFORM1 and INLINEFORM2 .
FastSent BIBREF16 is a sentence-level log-linear bag-of-words model. Like SkipThought, it uses adjacent sentences as the prediction target and is trained in an unsupervised fashion. Using word sequences allows the model to improve over the earlier work of paragraph2vec BIBREF14 . BIBREF16 augment FastSent further by training it to predict the constituent words of the sentence as well. This model is named FastSent + AE in our comparisons.
Compared to our approach, Siamese C-BOW BIBREF23 shares the idea of learning to average word embeddings over a sentence. However, it relies on a Siamese neural network architecture to predict surrounding sentences, contrasting our simpler unsupervised objective.
Note that on the character sequence level instead of word sequences, FastText BIBREF5 uses the same conceptual model to obtain better word embeddings. This is most similar to our proposed model, with two key differences: Firstly, we predict from source word sequences to target words, as opposed to character sequences to target words, and secondly, our model is averaging the source embeddings instead of summing them.
Models requiring structured data
DictRep BIBREF24 is trained to map dictionary definitions of the words to the pre-trained word embeddings of these words. They use two different architectures, namely BOW and RNN (LSTM) with the choice of learning the input word embeddings or using them pre-trained. A similar architecture is used by the CaptionRep variant, but here the task is the mapping of given image captions to a pre-trained vector representation of these images.
Evaluation Tasks
We use a standard set of supervised as well as unsupervised benchmark tasks from the literature to evaluate our trained models, following BIBREF16 . The breadth of tasks allows us to fairly measure generalization to a wide range of different domains, testing the general-purpose quality (universality) of all competing sentence embeddings. For downstream supervised evaluations, sentence embeddings are combined with logistic regression to predict target labels. In the unsupervised evaluation of sentence similarity, the cosine similarity between two embeddings is compared to human annotator judgements using correlation measures.
Downstream Supervised Evaluation. Sentence embeddings are evaluated for various supervised classification tasks as follows. We evaluate paraphrase identification (MSRP) BIBREF25 , classification of movie review sentiment (MR) BIBREF26 , product reviews (CR) BIBREF27 , subjectivity classification (SUBJ) BIBREF28 , opinion polarity (MPQA) BIBREF29 and question type classification (TREC) BIBREF30 . To classify, we use the code provided by BIBREF22 in the same manner as in BIBREF16 . For the MSRP dataset, containing pairs of sentences INLINEFORM0 with associated paraphrase label, we generate feature vectors by concatenating their Sent2Vec representations INLINEFORM1 with the component-wise product INLINEFORM2 . The predefined training split is used to tune the L2 penalty parameter using cross-validation and the accuracy and F1 scores are computed on the test set. For the remaining 5 datasets, Sent2Vec embeddings are inferred from input sentences and directly fed to a logistic regression classifier. Accuracy scores are obtained using 10-fold cross-validation for the MR, CR, SUBJ and MPQA datasets. For these datasets, nested cross-validation is used to tune the L2 penalty. For the TREC dataset, as for the MSRP dataset, the L2 penalty is tuned on the predefined train split using 10-fold cross-validation, and the accuracy is computed on the test set.
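A hedged sketch of this paraphrase-classification protocol using scikit-learn: pair features are built by concatenating the two sentence embeddings with their component-wise product, and an L2-regularized logistic regression is tuned by cross-validation. Random arrays stand in for real Sent2Vec embeddings and MSRP labels, so the printed accuracy is meaningless and only the mechanics are shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(4)
n_pairs, h = 200, 64
u = rng.normal(size=(n_pairs, h))        # embeddings of first sentences (placeholder)
v = rng.normal(size=(n_pairs, h))        # embeddings of second sentences (placeholder)
y = rng.integers(0, 2, size=n_pairs)     # paraphrase labels (placeholder)

features = np.hstack([u, v, u * v])      # concatenation plus component-wise product

clf = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},   # C is the inverse of the L2 penalty
    cv=5,
)
clf.fit(features, y)
print("best C:", clf.best_params_["C"], "cv accuracy:", round(clf.best_score_, 3))
```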
Unsupervised Similarity Evaluation. We perform unsupervised evaluation of the learnt sentence embeddings using the sentence cosine similarity, on the STS 2014 BIBREF31 and SICK 2014 BIBREF32 datasets. These similarity scores are compared to the gold-standard human judgements using Pearson's INLINEFORM0 BIBREF33 and Spearman's INLINEFORM1 BIBREF34 correlation scores. The SICK dataset consists of about 10,000 sentence pairs along with relatedness scores of the pairs. The STS 2014 dataset contains 3,770 pairs, divided into six different categories on the basis of the origin of sentences/phrases, namely Twitter, headlines, news, forum, WordNet and images.
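And a corresponding sketch of the unsupervised protocol: cosine similarities between embedding pairs are correlated with gold relatedness scores using Pearson's and Spearman's coefficients from scipy. Again, random arrays stand in for real embeddings and human judgements.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(5)
n_pairs, h = 100, 64
a = rng.normal(size=(n_pairs, h))            # embeddings of first sentences (placeholder)
b = rng.normal(size=(n_pairs, h))            # embeddings of second sentences (placeholder)
gold = rng.uniform(0, 5, size=n_pairs)       # human relatedness scores (placeholder)

cos = np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
print("Pearson r:   ", round(pearsonr(cos, gold)[0], 3))
print("Spearman rho:", round(spearmanr(cos, gold)[0], 3))
```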
Results and Discussion
In Tables TABREF18 and TABREF19 , we compare our results with those obtained by BIBREF16 on different models. The last column of Table TABREF21 shows the dramatic improvement in training time of our models (and other C-BOW-inspired models) in contrast to neural network based models. All our Sent2Vec models are trained on a machine with 2x Intel Xeon E5 INLINEFORM0 2680v3, 12 cores @2.5GHz.
Along with the models discussed in Section SECREF3 , this also includes the sentence embedding baselines obtained by simple averaging of word embeddings over the sentence, in both the C-BOW and skip-gram variants. TF-IDF BOW is a representation consisting of the counts of the 200,000 most common feature-words, weighted by their TF-IDF frequencies. To ensure coherence, we only include unsupervised models in the main paper. Performance of supervised and semi-supervised models on these evaluations can be observed in Tables TABREF29 and TABREF30 in the supplementary material.
Downstream Supervised Evaluation Results. On running supervised evaluations and observing the results in Table TABREF18 , we find that on average our models are second only to SkipThought vectors. Also, both our models achieve state-of-the-art results on the CR task. We also observe that on half of the supervised tasks, our unigrams + bigrams model is the best model after SkipThought. Our models are weaker on the MSRP task (which consists of the identification of labelled paraphrases) compared to state-of-the-art methods. However, we observe that the models which perform very strongly on this task end up faring very poorly on the other tasks, indicating a lack of generalizability. On the rest of the tasks, our models perform extremely well. The SkipThought model is able to outperform our models on most of the tasks as it is trained to predict the previous and next sentences, and many tasks are able to make use of this contextual information, which is missing in our Sent2Vec models. For example, the TREC task is a poor measure of how one predicts the content of the sentence (the question) but a good measure of how the next sentence in the sequence (the answer) is predicted.
Unsupervised Similarity Evaluation Results. In Table TABREF19 , we see that our Sent2Vec models are state-of-the-art on the majority of tasks when comparing to all the unsupervised models trained on the Toronto corpus, and clearly achieve the best averaged performance. Our Sent2Vec models also on average outperform or are on par with the C-PHRASE model, despite significantly lagging behind on the STS 2014 WordNet and News subtasks. This observation can be attributed to the fact that a big chunk of the data that the C-PHRASE model is trained on comes from English Wikipedia, helping it to perform well on datasets involving definition and news items. Also, C-PHRASE uses data three times the size of the Toronto book corpus. Interestingly, our model outperforms C-PHRASE when trained on Wikipedia, as shown in Table TABREF21 , despite the fact that we use no parse tree information.
Official STS 2017 benchmark. In the official results of the most recent edition of the STS 2017 benchmark BIBREF35 , our model also significantly outperforms C-PHRASE, and in fact delivers the best unsupervised baseline method.
For the Siamese C-BOW model trained on the Toronto corpus, supervised evaluation as well as similarity evaluation results on the SICK 2014 dataset are unavailable.
Macro Average. To summarize our contributions on both supervised and unsupervised tasks, in Table TABREF21 we present the results in terms of the macro average over the averages of both supervised and unsupervised tasks along with the training times of the models. For unsupervised tasks, averages are taken over both Spearman and Pearson scores. The comparison includes the best performing unsupervised and semi-supervised methods described in Section SECREF3 . For models trained on the Toronto books dataset, we report a 3.8 INLINEFORM0 points improvement over the state of the art. Considering all supervised, semi-supervised methods and all datasets compared in BIBREF16 , we report a 2.2 INLINEFORM1 points improvement.
We also see a noticeable improvement in accuracy as we use larger datasets like Twitter and Wikipedia. We furthermore see that the Sent2Vec models are faster to train when compared to methods like SkipThought and DictRep, owing to the SGD optimizer allowing a high degree of parallelizability.
We can clearly see Sent2Vec outperforming other unsupervised and even semi-supervised methods. This can be attributed to the superior generalizability of our model across supervised and unsupervised tasks.
Comparison with BIBREF4 . We also compare our work with BIBREF4 , who also use additive compositionality to obtain sentence embeddings. However, in contrast to our model, they use fixed, pre-trained word embeddings to build a weighted average of these embeddings using unigram probabilities. Since we could not find pre-trained state-of-the-art word embeddings trained on the Toronto books corpus, we evaluated their method using GloVe embeddings obtained from the larger Common Crawl corpus, which is 42 times larger than our Twitter corpus, greatly favoring their method over ours.
In Table TABREF22 , we report an experimental comparison to their model on unsupervised tasks. In the table, the suffix W indicates that their down-weighting scheme has been used, while the suffix R indicates the removal of the first principal component. They report values of INLINEFORM0 as giving the best results and used INLINEFORM1 for all their experiments. We observe that our results are competitive with the embeddings of BIBREF4 for purely unsupervised methods. It is important to note that the scores obtained from supervised task-specific PSL embeddings trained for the purpose of semantic similarity outperform our method on both SICK and average STS 2014, which is expected as our model is trained purely unsupervised.
In order to facilitate a more detailed comparison, we also evaluated the unsupervised GloVe + WR embeddings on downstream supervised tasks and compared them to our Twitter models. To use BIBREF4 's method in a supervised setup, we precomputed and stored the common discourse vector INLINEFORM0 using 2 million random Wikipedia sentences. On average, our models outperform their unsupervised models by a significant margin, despite the fact that they used GloVe embeddings trained on corpora 42 times larger than ours. Our models also outperform their semi-supervised PSL + WR model. This indicates that our model learns a more precise weighting scheme than the static one proposed by BIBREF4 .
The effect of datasets and n-grams. Despite being trained on three very different datasets, all of our models generalize well to sometimes very specific domains. Models trained on the Toronto corpus are the state-of-the-art on the STS 2014 images dataset, even beating the supervised CaptionRep model trained on images. We also see that the addition of bigrams to our models doesn't help much when it comes to unsupervised evaluations but gives a significant boost in accuracy on supervised tasks. We attribute this phenomenon to the ability of bigram models to capture some non-compositional features missed by unigram models. Having a single representation for “not good" or “very bad" can boost the supervised model's ability to infer relevant features for the corresponding classifier. For semantic similarity tasks however, the relative uniqueness of bigrams results in pushing sentence representations further apart, which can explain the average drop of scores for bigram models on those tasks.
On learning the importance and the direction of the word vectors. Our model – by learning how to generate and compose word vectors – has to learn both the direction of the word embeddings as well as their norm. Considering the norms of the word vectors as used by our averaging over the sentence, we observe an interesting distribution of the “importance” of each word. In Figure FIGREF24 we show the profile of the INLINEFORM0 -norm as a function of INLINEFORM1 for each INLINEFORM2 , and compare it to the static down-weighting mechanism of BIBREF4 . We can observe that our model is learning to down-weight frequent tokens by itself. It is also down-weighting rare tokens, and the INLINEFORM3 profile seems to roughly follow Luhn's hypothesis BIBREF36 , a well-known information retrieval paradigm stating that mid-rank terms are the most significant for discriminating content.
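A small diagnostic sketch of how such a norm-versus-frequency profile can be computed from a trained model: group words into log-frequency bins and report the mean L2 norm of their source vectors per bin. Random vectors and Zipf-distributed counts are used as placeholders here, so the output only shows the mechanics, not the learned down-weighting profile.

```python
import numpy as np

rng = np.random.default_rng(6)
n_words, h = 10_000, 64
vectors = rng.normal(size=(n_words, h))               # stand-in for learned source vectors
counts = rng.zipf(a=1.5, size=n_words).astype(float)  # stand-in for corpus frequencies

norms = np.linalg.norm(vectors, axis=1)
logf = np.log(counts)
edges = np.linspace(logf.min(), logf.max(), 10)
bins = np.digitize(logf, edges)
for b in range(1, 11):
    mask = bins == b
    if mask.any():
        print(f"log-frequency bin {b:2d}: mean ||v_w|| = {norms[mask].mean():.3f}")
```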
| Accuracy and F1 score for supervised tasks, Pearson's and Spearman's correlation for unsupervised tasks |
03f4e5ac5a9010191098d6d66ed9bbdfafcbd013 | 03f4e5ac5a9010191098d6d66ed9bbdfafcbd013_0 | Q: How do the n-gram features incorporate compositionality?
Text: Introduction
Improving unsupervised learning is of key importance for advancing machine learning methods, as to unlock access to almost unlimited amounts of data to be used as training resources. The majority of recent success stories of deep learning does not fall into this category but instead relied on supervised training (in particular in the vision domain). A very notable exception comes from the text and natural language processing domain, in the form of semantic word embeddings trained unsupervised BIBREF0 , BIBREF1 , BIBREF2 . Within only a few years from their invention, such word representations – which are based on a simple matrix factorization model as we formalize below – are now routinely trained on very large amounts of raw text data, and have become ubiquitous building blocks of a majority of current state-of-the-art NLP applications.
While very useful semantic representations are available for words, it remains challenging to produce and learn such semantic embeddings for longer pieces of text, such as sentences, paragraphs or entire documents. Even more so, it remains a key goal to learn such general-purpose representations in an unsupervised way.
Currently, two contrary research trends have emerged in text representation learning: On one hand, a strong trend in deep-learning for NLP leads towards increasingly powerful and complex models, such as recurrent neural networks (RNNs), LSTMs, attention models and even Neural Turing Machine architectures. While extremely strong in expressiveness, the increased model complexity makes such models much slower to train on larger datasets. On the other end of the spectrum, simpler “shallow” models such as matrix factorizations (or bilinear models) can benefit from training on much larger sets of data, which can be a key advantage, especially in the unsupervised setting.
Surprisingly, for constructing sentence embeddings, naively using averaged word vectors was shown to outperform LSTMs (see BIBREF3 for plain averaging, and BIBREF4 for weighted averaging). This example shows potential in exploiting the trade-off between model complexity and ability to process huge amounts of text using scalable algorithms, towards the simpler side. In view of this trade-off, our work here further advances unsupervised learning of sentence embeddings. Our proposed model can be seen as an extension of the C-BOW BIBREF0 , BIBREF1 training objective to train sentence instead of word embeddings. We demonstrate that the empirical performance of our resulting general-purpose sentence embeddings very significantly exceeds the state of the art, while keeping the model simplicity as well as training and inference complexity exactly as low as in averaging methods BIBREF3 , BIBREF4 , thereby also putting the work by BIBREF4 in perspective.
Contributions. The main contributions in this work can be summarized as follows:
Model
Our model is inspired by simple matrix factor models (bilinear models) such as recently very successfully used in unsupervised learning of word embeddings BIBREF0 , BIBREF1 , BIBREF2 , BIBREF5 as well as supervised of sentence classification BIBREF6 . More precisely, these models can all be formalized as an optimization problem of the form DISPLAYFORM0
for two parameter matrices INLINEFORM0 and INLINEFORM1 , where INLINEFORM2 denotes the vocabulary. Here, the columns of the matrix INLINEFORM3 represent the learnt source word vectors whereas those of INLINEFORM4 represent the target word vectors. For a given sentence INLINEFORM5 , which can be of arbitrary length, the indicator vector INLINEFORM6 is a binary vector encoding INLINEFORM7 (bag of words encoding).
Fixed-length context windows INLINEFORM0 running over the corpus are used in word embedding methods as in C-BOW BIBREF0 , BIBREF1 and GloVe BIBREF2 . Here we have INLINEFORM1 and each cost function INLINEFORM2 only depends on a single row of its input, describing the observed target word for the given fixed-length context INLINEFORM3 . In contrast, for sentence embeddings which are the focus of our paper here, INLINEFORM4 will be entire sentences or documents (therefore variable length). This property is shared with the supervised FastText classifier BIBREF6 , which however uses soft-max with INLINEFORM5 being the number of class labels.
Proposed Unsupervised Model
We propose a new unsupervised model, Sent2Vec, for learning universal sentence embeddings. Conceptually, the model can be interpreted as a natural extension of the word-contexts from C-BOW BIBREF0 , BIBREF1 to a larger sentence context, with the sentence words being specifically optimized towards additive combination over the sentence, by means of the unsupervised objective function.
Formally, we learn a source (or context) embedding INLINEFORM0 and target embedding INLINEFORM1 for each word INLINEFORM2 in the vocabulary, with embedding dimension INLINEFORM3 and INLINEFORM4 as in ( EQREF6 ). The sentence embedding is defined as the average of the source word embeddings of its constituent words, as in ( EQREF8 ). We augment this model furthermore by also learning source embeddings for not only unigrams but also n-grams present in each sentence, and averaging the n-gram embeddings along with the words, i.e., the sentence embedding INLINEFORM5 for INLINEFORM6 is modeled as DISPLAYFORM0
where INLINEFORM0 is the list of n-grams (including unigrams) present in sentence INLINEFORM1 . In order to predict a missing word from the context, our objective models the softmax output approximated by negative sampling following BIBREF0 . For the large number of output classes INLINEFORM2 to be predicted, negative sampling is known to significantly improve training efficiency, see also BIBREF7 . Given the binary logistic loss function INLINEFORM3 coupled with negative sampling, our unsupervised training objective is formulated as follows: INLINEFORM4
where INLINEFORM0 corresponds to the current sentence and INLINEFORM1 is the set of words sampled negatively for the word INLINEFORM2 . The negatives are sampled following a multinomial distribution where each word INLINEFORM5 is associated with a probability INLINEFORM6 , where INLINEFORM7 is the normalized frequency of INLINEFORM8 in the corpus.
To select the possible target unigrams (positives), we use subsampling as in BIBREF6 , BIBREF5 , each word INLINEFORM0 being discarded with probability INLINEFORM1 where INLINEFORM2 . Where INLINEFORM3 is the subsampling hyper-parameter. Subsampling prevents very frequent words of having too much influence in the learning as they would introduce strong biases in the prediction task. With positives subsampling and respecting the negative sampling distribution, the precise training objective function becomes DISPLAYFORM0
Computational Efficiency
In contrast to more complex neural network based models, one of the core advantages of the proposed technique is the low computational cost for both inference and training. Given a sentence INLINEFORM0 and a trained model, computing the sentence representation INLINEFORM1 only requires INLINEFORM2 floating point operations (or INLINEFORM3 to be precise for the n-gram case, see ( EQREF8 )), where INLINEFORM4 is the embedding dimension. The same holds for the cost of training with SGD on the objective ( EQREF10 ), per sentence seen in the training corpus. Due to the simplicity of the model, parallel training is straight-forward using parallelized or distributed SGD.
Also, in order to store higher-order n-grams efficiently, we use the standard hashing trick, see e.g. BIBREF8 , with the same hashing function as used in FastText BIBREF6 , BIBREF5 .
Comparison to C-BOW
C-BOW BIBREF0 , BIBREF1 aims to predict a chosen target word given its fixed-size context window, the context being defined by the average of the vectors associated with the words at a distance less than the window size hyper-parameter INLINEFORM0 . If our system, when restricted to unigram features, can be seen as an extension of C-BOW where the context window includes the entire sentence, in practice there are few important differences as C-BOW uses important tricks to facilitate the learning of word embeddings. C-BOW first uses frequent word subsampling on the sentences, deciding to discard each token INLINEFORM1 with probability INLINEFORM2 or alike (small variations exist across implementations). Subsampling prevents the generation of n-grams features, and deprives the sentence of an important part of its syntactical features. It also shortens the distance between subsampled words, implicitly increasing the span of the context window. A second trick consists of using dynamic context windows: for each subsampled word INLINEFORM3 , the size of its associated context window is sampled uniformly between 1 and INLINEFORM4 . Using dynamic context windows is equivalent to weighing by the distance from the focus word INLINEFORM5 divided by the window size BIBREF9 . This makes the prediction task local, and go against our objective of creating sentence embeddings as we want to learn how to compose all n-gram features present in a sentence. In the results section, we report a significant improvement of our method over C-BOW.
Model Training
Three different datasets have been used to train our models: the Toronto book corpus, Wikipedia sentences and tweets. The Wikipedia and Toronto books sentences have been tokenized using the Stanford NLP library BIBREF10 , while for tweets we used the NLTK tweets tokenizer BIBREF11 . For training, we select a sentence randomly from the dataset and then proceed to select all the possible target unigrams using subsampling. We update the weights using SGD with a linearly decaying learning rate.
Also, to prevent overfitting, for each sentence we use dropout on its list of n-grams INLINEFORM0 , where INLINEFORM1 is the set of all unigrams contained in sentence INLINEFORM2 . After empirically trying multiple dropout schemes, we find that dropping INLINEFORM3 n-grams ( INLINEFORM4 ) for each sentence is giving superior results compared to dropping each token with some fixed probability. This dropout mechanism would negatively impact shorter sentences. The regularization can be pushed further by applying L1 regularization to the word vectors. Encouraging sparsity in the embedding vectors is particularly beneficial for high dimension INLINEFORM5 . The additional soft thresholding in every SGD step adds negligible computational cost. See also Appendix SECREF8 . We train two models on each dataset, one with unigrams only and one with unigrams and bigrams. All training parameters for the models are provided in Table TABREF25 in the supplementary material. Our C++ implementation builds upon the FastText library BIBREF6 , BIBREF5 . We will make our code and pre-trained models available open-source.
Related Work
We discuss existing models which have been proposed to construct sentence embeddings. While there is a large body of works in this direction – several among these using e.g. labelled datasets of paraphrase pairs to obtain sentence embeddings in a supervised manner BIBREF12 , BIBREF3 , BIBREF13 to learn sentence embeddings – we here focus on unsupervised, task-independent models. While some methods require ordered raw text i.e., a coherent corpus where the next sentence is a logical continuation of the previous sentence, others rely only on raw text i.e., an unordered collection of sentences. Finally, we also discuss alternative models built from structured data sources.
Unsupervised Models Independent of Sentence Ordering
The ParagraphVector DBOW model BIBREF14 is a log-linear model which is trained to learn sentence as well as word embeddings and then use a softmax distribution to predict words contained in the sentence given the sentence vector representation. They also propose a different model ParagraphVector DM where they use n-grams of consecutive words along with the sentence vector representation to predict the next word.
BIBREF15 also presented an early approach to obtain compositional embeddings from word vectors. They use different compositional techniques including static averaging or Fisher vectors of a multivariate Gaussian to obtain sentence embeddings from word2vec models.
BIBREF16 propose a Sequential (Denoising) Autoencoder, S(D)AE. This model first introduces noise in the input data: Firstly each word is deleted with probability INLINEFORM0 , then for each non-overlapping bigram, words are swapped with probability INLINEFORM1 . The model then uses an LSTM-based architecture to retrieve the original sentence from the corrupted version. The model can then be used to encode new sentences into vector representations. In the case of INLINEFORM2 , the model simply becomes a Sequential Autoencoder. BIBREF16 also propose a variant (S(D)AE + embs.) in which the words are represented by fixed pre-trained word vector embeddings.
BIBREF4 propose a model in which sentences are represented as a weighted average of fixed (pre-trained) word vectors, followed by post-processing step of subtracting the principal component. Using the generative model of BIBREF17 , words are generated conditioned on a sentence “discourse” vector INLINEFORM0 : INLINEFORM1
where INLINEFORM0 and INLINEFORM1 and INLINEFORM2 , INLINEFORM3 are scalars. INLINEFORM4 is the common discourse vector, representing a shared component among all discourses, mainly related to syntax. It allows the model to better generate syntactical features. The INLINEFORM5 term is here to enable the model to generate some frequent words even if their matching with the discourse vector INLINEFORM6 is low.
Therefore, this model tries to generate sentences as a mixture of three type of words: words matching the sentence discourse vector INLINEFORM0 , syntactical words matching INLINEFORM1 , and words with high INLINEFORM2 . BIBREF4 demonstrated that for this model, the MLE of INLINEFORM3 can be approximated by INLINEFORM4 , where INLINEFORM5 is a scalar. The sentence discourse vector can hence be obtained by subtracting INLINEFORM6 estimated by the first principal component of INLINEFORM7 's on a set of sentences. In other words, the sentence embeddings are obtained by a weighted average of the word vectors stripping away the syntax by subtracting the common discourse vector and down-weighting frequent tokens. They generate sentence embeddings from diverse pre-trained word embeddings among which are unsupervised word embeddings such as GloVe BIBREF2 as well as supervised word embeddings such as paragram-SL999 (PSL) BIBREF18 trained on the Paraphrase Database BIBREF19 .
In a very different line of work, C-PHRASE BIBREF20 relies on additional information from the syntactic parse tree of each sentence, which is incorporated into the C-BOW training objective.
BIBREF21 show that single layer CNNs can be modeled using a tensor decomposition approach. While building on an unsupervised objective, the employed dictionary learning step for obtaining phrase templates is task-specific (for each use-case), not resulting in general-purpose embeddings.
Unsupervised Models Depending on Sentence Ordering
The SkipThought model BIBREF22 combines sentence level models with recurrent neural networks. Given a sentence INLINEFORM0 from an ordered corpus, the model is trained to predict INLINEFORM1 and INLINEFORM2 .
FastSent BIBREF16 is a sentence-level log-linear bag-of-words model. Like SkipThought, it uses adjacent sentences as the prediction target and is trained in an unsupervised fashion. Using word sequences allows the model to improve over the earlier work of paragraph2vec BIBREF14 . BIBREF16 augment FastSent further by training it to predict the constituent words of the sentence as well. This model is named FastSent + AE in our comparisons.
Compared to our approach, Siamese C-BOW BIBREF23 shares the idea of learning to average word embeddings over a sentence. However, it relies on a Siamese neural network architecture to predict surrounding sentences, contrasting our simpler unsupervised objective.
Note that on the character sequence level instead of word sequences, FastText BIBREF5 uses the same conceptual model to obtain better word embeddings. This is most similar to our proposed model, with two key differences: Firstly, we predict from source word sequences to target words, as opposed to character sequences to target words, and secondly, our model is averaging the source embeddings instead of summing them.
Models requiring structured data
DictRep BIBREF24 is trained to map dictionary definitions of the words to the pre-trained word embeddings of these words. They use two different architectures, namely BOW and RNN (LSTM) with the choice of learning the input word embeddings or using them pre-trained. A similar architecture is used by the CaptionRep variant, but here the task is the mapping of given image captions to a pre-trained vector representation of these images.
Evaluation Tasks
We use a standard set of supervised as well as unsupervised benchmark tasks from the literature to evaluate our trained models, following BIBREF16 . The breadth of tasks allows to fairly measure generalization to a wide area of different domains, testing the general-purpose quality (universality) of all competing sentence embeddings. For downstream supervised evaluations, sentence embeddings are combined with logistic regression to predict target labels. In the unsupervised evaluation for sentence similarity, correlation of the cosine similarity between two embeddings is compared to human annotators.
Downstream Supervised Evaluation. Sentence embeddings are evaluated for various supervised classification tasks as follows. We evaluate paraphrase identification (MSRP) BIBREF25 , classification of movie review sentiment (MR) BIBREF26 , product reviews (CR) BIBREF27 , subjectivity classification (SUBJ) BIBREF28 , opinion polarity (MPQA) BIBREF29 and question type classification (TREC) BIBREF30 . To classify, we use the code provided by BIBREF22 in the same manner as in BIBREF16 . For the MSRP dataset, containing pairs of sentences INLINEFORM0 with associated paraphrase label, we generate feature vectors by concatenating their Sent2Vec representations INLINEFORM1 with the component-wise product INLINEFORM2 . The predefined training split is used to tune the L2 penalty parameter using cross-validation and the accuracy and F1 scores are computed on the test set. For the remaining 5 datasets, Sent2Vec embeddings are inferred from input sentences and directly fed to a logistic regression classifier. Accuracy scores are obtained using 10-fold cross-validation for the MR, CR, SUBJ and MPQA datasets. For those datasets nested cross-validation is used to tune the L2 penalty. For the TREC dataset, as for the MRSP dataset, the L2 penalty is tuned on the predefined train split using 10-fold cross-validation, and the accuracy is computed on the test set.
Unsupervised Similarity Evaluation. We perform unsupervised evaluation of the learnt sentence embeddings using the sentence cosine similarity, on the STS 2014 BIBREF31 and SICK 2014 BIBREF32 datasets. These similarity scores are compared to the gold-standard human judgements using Pearson's INLINEFORM0 BIBREF33 and Spearman's INLINEFORM1 BIBREF34 correlation scores. The SICK dataset consists of about 10,000 sentence pairs along with relatedness scores of the pairs. The STS 2014 dataset contains 3,770 pairs, divided into six different categories on the basis of the origin of sentences/phrases, namely Twitter, headlines, news, forum, WordNet and images.
Results and Discussion
In Tables TABREF18 and TABREF19 , we compare our results with those obtained by BIBREF16 on different models. Table TABREF21 in the last column shows the dramatic improvement in training time of our models (and other C-BOW-inspired models) in contrast to neural network based models. All our Sent2Vec models are trained on a machine with 2x Intel Xeon E5 INLINEFORM0 2680v3, 12 cores @2.5GHz.
Along with the models discussed in Section SECREF3 , this also includes the sentence embedding baselines obtained by simple averaging of word embeddings over the sentence, in both the C-BOW and skip-gram variants. TF-IDF BOW is a representation consisting of the counts of the 200,000 most common feature-words, weighed by their TF-IDF frequencies. To ensure coherence, we only include unsupervised models in the main paper. Performance of supervised and semi-supervised models on these evaluations can be observed in Tables TABREF29 and TABREF30 in the supplementary material.
Downstream Supervised Evaluation Results. On running supervised evaluations and observing the results in Table TABREF18 , we find that on an average our models are second only to SkipThought vectors. Also, both our models achieve state of the art results on the CR task. We also observe that on half of the supervised tasks, our unigrams + bigram model is the best model after SkipThought. Our models are weaker on the MSRP task (which consists of the identification of labelled paraphrases) compared to state-of-the-art methods. However, we observe that the models which perform very strongly on this task end up faring very poorly on the other tasks, indicating a lack of generalizability. On rest of the tasks, our models perform extremely well. The SkipThought model is able to outperform our models on most of the tasks as it is trained to predict the previous and next sentences and a lot of tasks are able to make use of this contextual information missing in our Sent2Vec models. For example, the TREC task is a poor measure of how one predicts the content of the sentence (the question) but a good measure of how the next sentence in the sequence (the answer) is predicted.
Unsupervised Similarity Evaluation Results. In Table TABREF19 , we see that our Sent2Vec models are state-of-the-art on the majority of tasks when comparing to all the unsupervised models trained on the Toronto corpus, and clearly achieve the best averaged performance. Our Sent2Vec models also on average outperform or are at par with the C-PHRASE model, despite significantly lagging behind on the STS 2014 WordNet and News subtasks. This observation can be attributed to the fact that a big chunk of the data that the C-PHRASE model is trained on comes from English Wikipedia, helping it to perform well on datasets involving definition and news items. Also, C-PHRASE uses data three times the size of the Toronto book corpus. Interestingly, our model outperforms C-PHRASE when trained on Wikipedia, as shown in Table TABREF21 , despite the fact that we use no parse tree information. Official STS 2017 benchmark. In the official results of the most recent edition of the STS 2017 benchmark BIBREF35 , our model also significantly outperforms C-PHRASE, and in fact delivers the best unsupervised baseline method.
For the Siamese C-BOW model trained on the Toronto corpus, supervised evaluation as well as similarity evaluation results on the SICK 2014 dataset are unavailable.
Macro Average. To summarize our contributions on both supervised and unsupervised tasks, in Table TABREF21 we present the results in terms of the macro average over the averages of both supervised and unsupervised tasks along with the training times of the models. For unsupervised tasks, averages are taken over both Spearman and Pearson scores. The comparison includes the best performing unsupervised and semi-supervised methods described in Section SECREF3 . For models trained on the Toronto books dataset, we report a 3.8 INLINEFORM0 points improvement over the state of the art. Considering all supervised, semi-supervised methods and all datasets compared in BIBREF16 , we report a 2.2 INLINEFORM1 points improvement.
We also see a noticeable improvement in accuracy as we use larger datasets like Twitter and Wikipedia. We furthermore see that the Sent2Vec models are faster to train when compared to methods like SkipThought and DictRep, owing to the SGD optimizer allowing a high degree of parallelizability.
We can clearly see Sent2Vec outperforming other unsupervised and even semi-supervised methods. This can be attributed to the superior generalizability of our model across supervised and unsupervised tasks.
Comparison with BIBREF4 . We also compare our work with BIBREF4 who also use additive compositionality to obtain sentence embeddings. However, in contrast to our model, they use fixed, pre-trained word embeddings to build a weighted average of these embeddings using unigram probabilities. While we couldn't find pre-trained state of the art word embeddings trained on the Toronto books corpus, we evaluated their method using GloVe embeddings obtained from the larger Common Crawl Corpus, which is 42 times larger than our twitter corpus, greatly favoring their method over ours.
In Table TABREF22 , we report an experimental comparison to their model on unsupervised tasks. In the table, the suffix W indicates that their down-weighting scheme has been used, while the suffix R indicates the removal of the first principal component. They report values of INLINEFORM0 as giving the best results and used INLINEFORM1 for all their experiments. We observe that our results are competitive with the embeddings of BIBREF4 for purely unsupervised methods. It is important to note that the scores obtained from supervised task-specific PSL embeddings trained for the purpose of semantic similarity outperform our method on both SICK and average STS 2014, which is expected as our model is trained purely unsupervised.
In order to facilitate a more detailed comparison, we also evaluated the unsupervised Glove + WR embeddings on downstream supervised tasks and compared them to our twitter models. To use BIBREF4 's method in a supervised setup, we precomputed and stored the common discourse vector INLINEFORM0 using 2 million random Wikipedia sentences. On an average, our models outperform their unsupervised models by a significant margin, this despite the fact that they used GloVe embeddings trained on larger corpora than ours (42 times larger). Our models also outperform their semi-supervised PSL + WR model. This indicates our model learns a more precise weighing scheme than the static one proposed by BIBREF4 .
The effect of datasets and n-grams. Despite being trained on three very different datasets, all of our models generalize well to sometimes very specific domains. Models trained on the Toronto Corpus are the state-of-the-art on the STS 2014 images dataset, even beating the supervised CaptionRep model trained on images. We also see that the addition of bigrams to our models does not help much on unsupervised evaluations but gives a significant boost in accuracy on supervised tasks. We attribute this phenomenon to the ability of bigram models to capture some non-compositional features missed by unigram models. Having a single representation for “not good" or “very bad" can boost the supervised model's ability to infer relevant features for the corresponding classifier. For semantic similarity tasks however, the relative uniqueness of bigrams results in pushing sentence representations further apart, which can explain the average drop of scores for bigram models on those tasks.
On learning the importance and the direction of the word vectors. Our model – by learning how to generate and compose word vectors – has to learn both the direction of the word embeddings as well as their norm. Considering the norms of the word vectors used in our averaging over the sentence, we observe an interesting distribution of the “importance” of each word. In Figure FIGREF24 we show the profile of the INLINEFORM0 -norm as a function of INLINEFORM1 for each INLINEFORM2 , and compare it to the static down-weighting mechanism of BIBREF4 . We can observe that our model learns to down-weight frequent tokens by itself. It also down-weights rare tokens, and the INLINEFORM3 profile seems to roughly follow Luhn's hypothesis BIBREF36 , a well-known information retrieval paradigm stating that mid-rank terms are the most significant for discriminating content.
Conclusion
In this paper, we introduce a novel, computationally efficient, unsupervised, C-BOW-inspired method to train and infer sentence embeddings. On supervised evaluations, our method, on average, achieves better performance than all other unsupervised competitors with the exception of SkipThought. However, SkipThought vectors show very poor performance on sentence similarity tasks while our model is state-of-the-art for these evaluations on average. Also, our model is generalizable, extremely fast to train, simple to understand and easily interpretable, showing the relevance of simple and well-grounded representation models in contrast to the models using deep architectures. Future work could focus on augmenting the model to exploit data with ordered sentences. Furthermore, we would like to investigate the model's ability to use pre-trained embeddings for downstream transfer learning tasks.
L1 regularization of models
Optionally, our model can be additionally improved by adding an L1 regularizer term in the objective function, leading to slightly better generalization performance. Additionally, encouraging sparsity in the embedding vectors is beneficial for memory reasons, allowing higher embedding dimensions INLINEFORM0 .
We propose to apply L1 regularization individually to each word (and n-gram) vector (both source and target vectors). Formally, the training objective function ( EQREF10 ) then becomes DISPLAYFORM0
where INLINEFORM0 is the regularization parameter.
Now, in order to minimize a function of the form INLINEFORM0 where INLINEFORM1 is not differentiable over the domain, we can use the basic proximal-gradient scheme. In this iterative method, after doing a gradient descent step on INLINEFORM2 with learning rate INLINEFORM3 , we update INLINEFORM4 as DISPLAYFORM0
where INLINEFORM0 is called the proximal function BIBREF37 of INLINEFORM1 with INLINEFORM2 being the proximal parameter and INLINEFORM3 is the value of INLINEFORM4 after a gradient (or SGD) step on INLINEFORM5 .
In our case, INLINEFORM0 and the corresponding proximal operator is given by DISPLAYFORM0
where INLINEFORM0 corresponds to element-wise product.
Similar to the proximal-gradient scheme, in our case we can optionally use the thresholding operator on the updated word and n-gram vectors after an SGD step. The soft thresholding parameter used for this update is INLINEFORM0 and INLINEFORM1 for the source and target vectors respectively where INLINEFORM2 is the current learning rate, INLINEFORM3 is the INLINEFORM4 regularization parameter and INLINEFORM5 is the sentence on which SGD is being run.
We observe that INLINEFORM0 regularization using the proximal step gives our models a small boost in performance. Also, applying the thresholding operator takes only INLINEFORM1 floating point operations for updating the word vectors corresponding to the sentence and INLINEFORM2 for updating the target as well as the negative word vectors, where INLINEFORM3 is the number of negatives sampled and INLINEFORM4 is the embedding dimension. Thus, performing INLINEFORM5 regularization using the soft-thresholding operator comes with a small computational overhead.
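A minimal sketch of the soft-thresholding (proximal) step is shown below. The exact threshold values for the source and target vectors are the elided quantities above; the scalings in the usage comments (learning rate times regularization parameter, divided by the sentence length for source vectors) are an illustrative guess, not the paper's exact formula.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1: shrink every coordinate towards zero."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Hypothetical usage after one SGD step on a sentence S (U: source vectors, V: target vectors):
#   for w in words_and_ngrams_of(S):
#       U[w] = soft_threshold(U[w], lr * lam / len(S))   # assumed per-sentence scaling
#   for w in target_and_negative_words:
#       V[w] = soft_threshold(V[w], lr * lam)            # assumed scaling for target vectors
```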
We set INLINEFORM0 to be 0.0005 for both the Wikipedia and the Toronto Book Corpus unigrams + bigrams models. | by also learning source embeddings for not only unigrams but also n-grams present in each sentence, and averaging the n-gram embeddings along with the words |
9a9338d0e74fd315af643335e733445031bd7656 | 9a9338d0e74fd315af643335e733445031bd7656_0 | Q: Which dataset do they use?
Text: Introduction
Language models (LMs) are crucial components in many applications, such as speech recognition and machine translation. The aim of language models is to compute the probability of any given sentence INLINEFORM0 , which can be calculated as DISPLAYFORM0
The task of LMs is to calculate the probability of word INLINEFORM0 given its previous history INLINEFORM1 . INLINEFORM2 -gram LMs BIBREF0 and neural network based language models (NNLMs) BIBREF1 , BIBREF2 are two widely used language models. In INLINEFORM3 -gram LMs, the most recent INLINEFORM4 words are used as an approximation of the complete history, thus DISPLAYFORM0
This INLINEFORM0 -gram assumption can also be used to construct INLINEFORM1 -gram feedforward NNLMs BIBREF1 . In contrast, recurrent neural network LMs (RNNLMs) model the complete history via a recurrent connection.
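To make the truncated-history approximation concrete, the sketch below computes a sentence log probability under an n-gram LM via the chain rule, conditioning each word on at most the n-1 most recent words. The `ngram_prob` lookup is a placeholder for a smoothed n-gram probability table.

```python
import math

def ngram_sentence_logprob(words, ngram_prob, n=3):
    """Log P(w_1..w_T) under an n-gram LM: each word is conditioned on at
    most the n-1 most recent words (the truncated-history approximation)."""
    logp = 0.0
    for t, w in enumerate(words):
        history = tuple(words[max(0, t - n + 1):t])
        logp += math.log(ngram_prob(history, w))
    return logp
```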
Most previous work on language models has focused on utilising history information; future word context information has not been extensively investigated. There have been several attempts to incorporate future context information into recurrent neural network language models. Individual forward and backward RNNLMs can be built, and these two LMs combined with a log-linear interpolation BIBREF3 . In BIBREF4 , succeeding words were incorporated into an RNNLM within a Maximum Entropy framework. BIBREF5 investigated the use of bidirectional RNNLMs (bi-RNNLMs) for speech recognition. For a broadcast news task, sigmoid based RNNLMs gave small gains, while no performance improvement was obtained when using long short-term memory (LSTM) based RNNLMs. More recently, bi-RNNLMs have been shown to produce consistent, and significant, performance improvements over unidirectional RNNLMs (uni-RNNLMs) on a range of speech recognition tasks BIBREF6 .
Though they can yield performance gains, bi-RNNLMs pose several challenges for both model training and inference as they require the complete previous and future word context information to be taken into account. It is difficult to parallelise training efficiently. Lattice rescoring is also complicated for these LMs as future context needs to be incorporated. This means that the form of approximation used for uni-RNNLMs BIBREF7 cannot be applied. Hence, N-best rescoring is normally used BIBREF4 , BIBREF5 , BIBREF6 . However, the ability to manipulate lattices is very important in many speech applications. Lattices can be used for a wide range of downstream applications, such as confidence score estimation BIBREF8 , keyword search BIBREF9 and confusion network decoding BIBREF10 . In order to address these issues, a novel model structure, succeeding word RNNLMs (su-RNNLMs), is proposed in this paper. Instead of using a recurrent unit to capture the complete future word context as in bi-RNNLMs, a feedforward unit is used to model a small, fixed number of succeeding words. This allows existing efficient training BIBREF11 and lattice rescoring BIBREF7 algorithms developed for uni-RNNLMs to be extended to the proposed su-RNNLMs. Using these extended algorithms, compact lattices can be generated with su-RNNLMs, supporting lattice-based downstream processing.
The rest of this paper is organized as follows. Section SECREF2 gives a brief review of RNNLMs, including both unidirectional and bidirectional RNNLMs. The proposed model with succeeding words (su-RNNLMs) is introduced in Section SECREF3 , followed by a description of the lattice rescoring algorithm in Section SECREF4 . Section SECREF5 discusses the interpolation of language models. The experimental results are presented in Section SECREF6 and conclusions are drawn in Section SECREF7 .
Unidirectional RNNLMs
In contrast to feedforward NNLMs, which model only the previous INLINEFORM0 words, recurrent NNLMs BIBREF12 represent the full non-truncated history INLINEFORM1 for word INLINEFORM2 using the 1-of-K encoding of the previous word INLINEFORM3 and a continuous vector INLINEFORM4 as a compact representation of the remaining context INLINEFORM5 . Figure FIGREF5 shows an example of this unidirectional RNNLM (uni-RNNLM). The most recent word INLINEFORM6 is used as input and projected into a low-dimensional, continuous space via a linear projection layer. A recurrent hidden layer is used after this projection layer. The form of the recurrent layer can be based on a standard recurrent unit with sigmoid activations BIBREF2 , or more complicated forms such as gated recurrent unit (GRU) BIBREF13 and long short-term memory (LSTM) units BIBREF14 . A continuous vector INLINEFORM7 representing the complete history information INLINEFORM8 can be obtained using INLINEFORM9 and previous word INLINEFORM10 . This vector is used as input to the recurrent layer for the estimation of the next word. An output layer with a softmax function is used to calculate the probability INLINEFORM11 . An additional node is often added at the output layer to model the probability mass of out-of-shortlist (OOS) words to speed up softmax computation by limiting vocabulary size BIBREF15 . Similarly, an out-of-vocabulary (OOV) node can be added in the input layer to model OOV words. The probability of word sequence INLINEFORM12 is calculated as, DISPLAYFORM0
Perplexity (PPL) is a metric used widely to evaluate the quality of language models. According to the definition in BIBREF16 , the perplexity can be computed based on sentence probability with, DISPLAYFORM0
where INLINEFORM0 is the total number of words and INLINEFORM1 is the number of sentences in the evaluation corpus, and INLINEFORM2 is the number of words in the INLINEFORM3 th sentence. From the above equation, the PPL is calculated based on the average log probability of each word, which for unidirectional LMs yields the average sentence log probability.
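A minimal sketch of this computation is shown below. The exact normalization in the elided equation (e.g. whether sentence-end tokens are counted) is not recoverable here, so this follows the usual convention of exponentiating the negative average per-word log probability.

```python
import math

def perplexity(sentence_logprobs, num_words):
    """PPL from per-sentence (natural) log probabilities over the corpus:
    exp of the negative average per-word log probability."""
    total_logprob = sum(sentence_logprobs)   # sum over sentences of log P(sentence)
    return math.exp(-total_logprob / num_words)
```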
Uni-RNNLMs can be trained efficiently on Graphics Processing Units (GPUs) by using spliced sentence bunch (i.e. minibatch) mode BIBREF11 . Multiple sentences can be concatenated together to form a longer sequence and sets of these long sequences can then be aligned in parallel from left to right. This data structure is more efficient for minibatch based training as they have comparable sequence length BIBREF11 . When using these forms of language models for tasks like speech recognition, N-best rescoring is the most straightforward way to apply uni-RNNLMs. Lattice rescoring is also possible by introducing approximations BIBREF7 to control merging and expansion of different paths in lattice. This will be described in more detail in Section SECREF4 .
Bidirectional RNNLMs
Figure FIGREF8 illustrates an example of bidirectional RNNLMs (bi-RNNLMs). Unlike uni-RNNLMs, both the history word context INLINEFORM0 and future word context INLINEFORM1 are used to estimate the probability of the current word INLINEFORM2 . Two recurrent units are used to capture the previous and future information respectively. In the same fashion as uni-RNNLMs, INLINEFORM3 is a compact continuous vector of the history information INLINEFORM4 , while INLINEFORM5 is another continuous vector encoding the future information INLINEFORM6 . This future context vector is computed from the next word INLINEFORM7 and the previous future context vector INLINEFORM8 containing information of INLINEFORM9 . The concatenation of INLINEFORM10 and INLINEFORM11 is then fed into the output layer, with a softmax function, to calculate the output probability. In order to reduce the number of parameters, the projection layer for the previous and future words is often shared.
The probability of word sequence INLINEFORM0 can be computed using bi-RNNLMs as, DISPLAYFORM0
INLINEFORM0 is the unnormalized sentence probability computed from the individual word probabilities of the bi-RNNLM. INLINEFORM1 is a sentence-level normalization term to ensure the sentence probability is appropriately normalized. This is defined as, DISPLAYFORM0
where INLINEFORM0 is the set of all possible sentences. Unfortunately, this normalization term is impractical to calculate for most tasks.
In a similar form to Equation EQREF6 , the PPL of bi-RNNLMs can be calculated based on sentence probability as, DISPLAYFORM0
However, INLINEFORM0 is often infeasible to obtain. As a result, it is not possible to compute a valid perplexity from bi-RNNLMs. Nevertheless, the average log probability of each word can be used to get a “pseudo” perplexity (PPL). DISPLAYFORM0
This is the second term of the valid PPL of bi-RNNLMs shown in Equation EQREF11 . It is a “pseudo” PPL because the normalized sentence probability INLINEFORM0 is impossible to obtain and the unnormalized sentence probability INLINEFORM1 is used instead. Hence, the “pseudo” PPL of bi-RNNLMs is not comparable with the valid PPL of uni-RNNLMs. However, the value of “pseudo” PPL provides information on the average word probability from bi-RNNLMs since it is obtained using the word probability.
In order to achieve good performance for speech recognition, BIBREF6 proposed an additional smoothing of the bi-RNNLM probability at test time. The probability of bi-RNNLMs is smoothed as, DISPLAYFORM0
where INLINEFORM0 is the activation before the softmax function for node INLINEFORM1 in the output layer. INLINEFORM2 is an empirical smoothing factor, which is chosen as 0.7 in this paper.
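A minimal sketch of this smoothing is shown below: the pre-softmax activations are scaled by the smoothing factor before normalization, which flattens the otherwise very sharp output distribution.

```python
import numpy as np

def smoothed_softmax(activations, gamma=0.7):
    """Bi-RNNLM/su-RNNLM probability smoothing: scale the pre-softmax
    activations by gamma (0.7 here) and renormalise."""
    a = gamma * np.asarray(activations, dtype=float)
    a -= a.max()                      # numerical stability
    e = np.exp(a)
    return e / e.sum()
```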
The use of both preceding and following context information in bi-RNNLMs presents challenges to both model training and inference. First, N-best rescoring is normally used for speech recognition BIBREF6 . Lattice rescoring is impractical for bi-RNNLMs as the computation of word probabilities requires information from the complete sentence.
Another drawback of bi-RNNLMs is the difficulty in training. The complete previous and future context information is required to predict the probability of each word. It is expensive to directly train bi-RNNLMs sentence by sentence, and difficult to parallelise the training for efficiency. In BIBREF5 , all sentences in the training corpus were concatenated together to form a single sequence to facilitate minibatch based training. This sequence was then “chopped” into sub-sequences with the average sentence length. Bi-RNNLMs were then trained on GPU by processing multiple sequences at the same time. This allows bi-RNNLMs to be efficiently trained. However, issues can arise from the random cutting of sentences: history and future context vectors may be reset in the middle of a sentence. In BIBREF6 , the bi-RNNLMs were trained in a more consistent fashion. Multiple sentences were aligned from left to right to form minibatches during bi-RNNLM training. In order to handle issues caused by variable sentence length, NULL tokens were appended to the ends of sentences to ensure that the aligned sentences had the same length. These NULL tokens were not used for parameter updates. In this paper, this approach is adopted to train bi-RNNLMs as it gave better performance.
RNNLMs with succeeding words
As discussed above, bi-RNNLMs are slow to train and difficult to use in lattice rescoring. In order to address these issues, a novel structure, the su-RNNLM, is proposed in this paper to incorporate future context information. The model structure is illustrated in Figure FIGREF14 . In the same fashion as bi-RNNLMs, the previous history INLINEFORM0 is modeled with recurrent units (e.g. LSTM, GRU). However, instead of modeling the complete future context information, INLINEFORM1 , using recurrent units, feedforward units are used to capture a finite number of succeeding words, INLINEFORM2 . The softmax function is again applied at the output layer to obtain the probability of the current word INLINEFORM3 . The word embeddings in the projection layer are shared for all input words. When the succeeding words are beyond the sentence boundary, a vector of 0 is used as the word embedding vector. This is similar to the zero padding of feedforward NNLMs at the beginning of each sentence BIBREF12 .
As the number of succeeding words is finite and fixed for each word, its succeeding words can be organized as a INLINEFORM0 -gram future context and used for minibatch mode training as in feedforward NNLMs BIBREF12 . Su-RNNLMs can then be trained efficiently in a similar fashion to uni-RNNLMs in a spliced sentence bunch mode BIBREF11 .
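A minimal sketch of such a model is given below. The layer sizes, the tanh nonlinearity, and the exact way the history and future vectors are combined before the output softmax are illustrative assumptions; only the overall structure (shared embeddings, a GRU over the history, a feedforward unit over k zero-padded succeeding words) follows the description above.

```python
import torch
import torch.nn as nn

class SuRNNLM(nn.Module):
    """Sketch of a su-RNNLM: a GRU over the history plus a feedforward unit
    over k succeeding words; hyper-parameters are illustrative only."""
    def __init__(self, vocab_size, emb_dim=256, hidden=512, k_future=3):
        super().__init__()
        # index 0 is reserved as padding so out-of-sentence future slots get a zero embedding
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)
        self.future_ff = nn.Linear(k_future * emb_dim, hidden)
        self.out = nn.Linear(2 * hidden, vocab_size)

    def forward(self, history, future):
        # history: (batch, t-1) indices w_1..w_{t-1}; future: (batch, k) indices w_{t+1}..w_{t+k}
        h, _ = self.gru(self.emb(history))          # (batch, t-1, hidden)
        h_last = h[:, -1, :]                        # history vector at the current position
        f = torch.tanh(self.future_ff(self.emb(future).flatten(1)))
        return torch.log_softmax(self.out(torch.cat([h_last, f], dim=-1)), dim=-1)
```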
Compared with equations EQREF4 and EQREF9 , the probability of word sequence INLINEFORM0 can be computed as DISPLAYFORM0
Again, the sentence level normalization term INLINEFORM0 is difficult to compute and only “pseudo” PPL can be obtained. The probabilities of su-RNNLMs are also very sharp, which can be seen from the “pseudo” PPLs in Table TABREF27 in Section SECREF6 . Hence, the bi-RNNLM probability smoothing given in Equation EQREF13 is also required for su-RNNLMs to achieve good performance at evaluation time.
Lattice rescoring
Lattice rescoring with feedforward NNLMs is straightforward BIBREF12 whereas approximations are required for uni-RNNLMs lattice rescoring BIBREF7 , BIBREF17 . As mentioned in Section SECREF7 , N-best rescoring has previously been used for bi-RNNLMs. It is not practical for bi-RNNLMs to be used for lattice rescoring and generation as both the complete previous and future context information are required. However, lattices are very useful in many applications, such as confidence score estimation BIBREF8 , keyword search BIBREF9 and confusion network decoding BIBREF10 . In contrast, su-RNNLMs require a fixed number of succeeding words, instead of the complete future context information. From Figure FIGREF14 , su-RNNLMs can be viewed as a combination of uni-RNNLMs for history information and feedforward NNLMs for future context information. Hence, lattice rescoring is feasible for su-RNNLMs by extending the lattice rescoring algorithm of uni-RNNLMs by considering additional fixed length future contexts.
Lattice rescoring of uni-RNNLMs
In this paper, the INLINEFORM0 -gram approximation BIBREF7 based approach is used for uni-RNNLMs lattice rescoring. When considering merging of two paths, if their previous INLINEFORM1 words are identical, the two paths are viewed as “equivalent” and can be merged. This is illustrated in Figure FIGREF19 for the start node of word INLINEFORM2 . The history information from the best path is kept for the following RNNLM probability computation and the histories of all other paths are discarded. For example, the path INLINEFORM3 is kept and the other path INLINEFORM4 is discarded given arc INLINEFORM5 .
There are two types of approximation involved for uni-RNNLM lattice rescoring, which are the merge and cache approximations. The merge approximation controls the merging of two paths. In BIBREF7 , the first path reaching the node was kept and all other paths with the same INLINEFORM0 -gram history were discarded irrespective of the associated scores. This introduces inaccuracies in the RNNLM probability calculation. The merge approximation can be improved by keeping the path with the highest accumulated score. This is the approach adopted in this work. For fast probability lookup in lattice rescoring, INLINEFORM1 -gram probabilities can be cached using INLINEFORM2 words as a key. A similar approach can be used with RNNLM probabilities. In BIBREF7 , RNNLM probabilities were cached based on the previous INLINEFORM3 words, which is referred to as the cache approximation. Thus a word probability obtained from the cache may be derived from another history sharing the same INLINEFORM4 previous words. This introduces another inaccuracy. In order to avoid this inaccuracy yet maintain the efficiency, the cache approximation used in BIBREF7 is improved by adopting the complete history as the key for caching RNNLM probabilities. Both modifications yield small but consistent improvements over BIBREF7 on a range of tasks.
Lattice rescoring of su-RNNLMs
For lattice rescoring with su-RNNLMs, the INLINEFORM0 -gram approximation can be adopted and extended to support the future word context. In order to handle succeeding words correctly, paths will be merged only if the following succeeding words are identical. In this way, the path expansion is carried out in both directions. Any two paths with the same succeeding words and INLINEFORM1 previous words are merged.
Figure FIGREF18 shows part of an example lattice generated by a 2-gram LM. In order to apply uni-RNNLM lattice rescoring using a 3-gram approximation, the grey shaded node in Figure FIGREF18 needs to be duplicated as word INLINEFORM0 has two distinct 3-gram histories, which are INLINEFORM1 and INLINEFORM2 respectively. Figure FIGREF19 shows the lattice after rescoring using a uni-RNNLM with 3-gram approximation. In order to apply su-RNNLMs for lattice rescoring, the succeeding words also need to be taken into account. Figure FIGREF20 is the expanded lattice using a su-RNNLM with 1 succeeding word. The grey shaded nodes in Figure FIGREF19 need to be expanded further as they have distinct succeeding words. The blue shaded nodes in Figure FIGREF20 are the expanded node in the resulting lattice.
Using the INLINEFORM0 -gram history approximation and given INLINEFORM1 succeeding words, the lattice expansion process is effectively a INLINEFORM2 -gram lattice expansion for uni-RNNLMs. For larger value of INLINEFORM3 and INLINEFORM4 , the resulting lattices can be very large. This can be addressed by pruning the lattice and doing initial lattice expansion with a uni-RNNLM.
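A minimal sketch of the path-merging criterion used in this expansion is shown below: two paths are merged only when both the truncated history and the k succeeding words agree, and among merged paths only the one with the highest accumulated score is kept. The concrete n and k values here are placeholders.

```python
def merge_key(path_words, position, n=3, k=1):
    """Key under which lattice paths are merged during su-RNNLM rescoring:
    the (n-1)-word history plus the k succeeding words around the node."""
    history = tuple(path_words[max(0, position - n + 1):position])
    future = tuple(path_words[position:position + k])
    return history, future

# During expansion, best_path[merge_key(...)] is replaced whenever a candidate
# path with a higher accumulated score reaches the same key (merge approximation).
```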
Language Model Interpolation
For unidirectional language models, such as INLINEFORM0 -gram model and uni-RNNLMs, the word probabilities are normally combined using linear interpolation, DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are the probabilities from INLINEFORM2 -gram and uni-RNN LMs respectively, INLINEFORM3 is the interpolation weight of uni-RNNLMs.
However, it is not valid to directly combine uni-LMs (e.g unidirectional INLINEFORM0 -gram LMs or RNNLMs) and bi-LMs (or su-LMs) using linear interpolation due to the sentence level normalisation term required for bi-LMs (or su-LMs) in Equation EQREF9 . As described in BIBREF6 , uni-LMs can be log-linearly interpolated with bi-LMs for speech recognition using, DISPLAYFORM0
where INLINEFORM0 is the appropriate normalisation term. The normalisation term can be discarded for speech recognition as it does not affect the hypothesis ranking. INLINEFORM1 and INLINEFORM2 are the probabilities from uni-LMs and bi-RNNLMs respectively. INLINEFORM3 is the log-linear interpolation weight of bi-RNNLMs. The issue of normalisation term in su-RNLMs is similar to that of bi-RNNLMs, as shown in Equation EQREF15 . Hence, log-linear interpolation can also be applied for the combination of su-RNNLMs and uni-LMs and is the approach used in this paper.
By default, linear interpolation is used to combine uni-RNNLMs and INLINEFORM0 -gram LMs. A two-stage interpolation is used when including bi-RNNLMs and su-RNNLMs. The uni-RNNLMs and INLINEFORM1 -gram LMs are first interpolated using linear interpolation. These linearly interpolated probabilities are then log-linearly interpolated with those of bi-RNNLMs (or su-RNNLMs).
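A minimal sketch of this two-stage combination is given below. The elided log-linear equation may use different exponents; the convex (1 - gamma, gamma) form and the omission of the normalisation term (which does not affect hypothesis ranking) are assumptions for illustration. The weights match the values reported later (lambda = 0.75, gamma = 0.3).

```python
import math

def two_stage_score(p_ngram, p_unirnn, p_su, lam=0.75, gamma=0.3):
    """Stage 1: linear interpolation of n-gram and uni-RNNLM probabilities
    (lam is the uni-RNNLM weight). Stage 2: log-linear combination with the
    su-RNNLM probability; the normalisation term is dropped."""
    p_uni = lam * p_unirnn + (1.0 - lam) * p_ngram
    return (1.0 - gamma) * math.log(p_uni) + gamma * math.log(p_su)
```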
Experiments
Experiments were conducted using the AMI IHM meeting corpus BIBREF18 to evaluate the speech recognition performance of various language models. The Kaldi training data configuration was used. A total of 78 hours of speech was used in acoustic model training. This consists of about 1M words of acoustic transcription. Eight meetings were excluded from the training set and used as the development and test sets.
The Kaldi acoustic model training recipe BIBREF19 featuring sequence training BIBREF20 was applied for deep neural network (DNN) training. CMLLR transformed MFCC features BIBREF21 were used as the input and 4000 clustered context dependent states were used as targets. The DNN was trained with 6 hidden layers, and each layer has 2048 hidden nodes.
The first part of the Fisher corpus, 13M words, was used as additional language model training data. A 49k word decoding vocabulary was used for all experiments. All LMs were trained on the combined (AMI+Fisher) data, 14M words in total. A 4-gram KN smoothed back-off LM without pruning was trained and used for lattice generation. GRU based recurrent units were used for all unidirectional and bidirectional RNNLMs. 512 hidden nodes were used in the hidden layer. An extended version of CUED-RNNLM BIBREF22 was developed for the training of uni-RNNLMs, bi-RNNLMs and su-RNNLMs. The related code and recipe will be available online. The linear interpolation weight INLINEFORM0 between 4-gram LMs and uni-RNNLMs was set to 0.75 as it gave the best performance on the development data. The log-linear interpolation weight INLINEFORM1 for bi-RNNLMs (or su-RNNLMs) was 0.3. The probabilities of bi-RNNLMs and su-RNNLMs were smoothed with a smoothing factor of 0.7 as suggested in BIBREF6 . The 3-gram approximation was applied for the history merging of uni-RNNLMs and su-RNNLMs during lattice rescoring and generation BIBREF7 .
Table TABREF26 shows the word error rates of the baseline system with 4-gram and uni-RNN LMs. Lattice rescoring and 100-best rescoring are applied to lattices generated by the 4-gram LM. As expected, uni-RNNLMs yield a significant performance improvement over 4-gram LMs. Lattice rescoring gives comparable performance to 100-best rescoring. Confusion network (CN) decoding can be applied to lattices generated by uni-RNNLM lattice rescoring and additional performance improvements can be achieved. However, it is difficult to apply confusion network decoding to the 100-best lists.
Table TABREF27 gives the training speed measured in words per second (w/s) and the (“pseudo”) PPLs of various RNNLMs with different amounts of future word context. When the number of succeeding words is 0, this is the baseline uni-RNNLM. When the number of succeeding words is set to INLINEFORM0 , a bi-RNNLM with complete future context information is used. It can be seen that su-RNNLMs give a training speed comparable to uni-RNNLMs. The additional computational load of the su-RNNLMs mainly comes from the feedforward unit for succeeding words as shown in Figure FIGREF14 . The computation in this part is much less than that of other parts such as the output layer and GRU layers. However, the training of su-RNNLMs is much faster than bi-RNNLMs as it is difficult to parallelise the training of bi-RNNLMs efficiently BIBREF6 . It is worth mentioning again that the PPLs of uni-RNNLMs cannot be compared directly with the “pseudo” PPLs of bi-RNNLMs and su-RNNLMs. But both PPLs and “pseudo” PPLs reflect the average log probability of each word. From Table TABREF27 , with an increasing number of succeeding words, the “pseudo” PPLs of the su-RNNLMs keep decreasing, yielding values comparable to bi-RNNLMs.
Table TABREF28 gives the WER results of 100-best rescoring with various language models. For bi-RNNLMs (or su-RNNLMs), it is not possible to use linear interpolation. Thus a two stage approach is adopted as described in Section SECREF5 . This results in slight differences, second decimal place, between the uni-RNNLM case and the 0 future context su-RNNLM. The increasing number of the succeeding words consistently reduces the WER. With 1 succeeding word, the WERs were reduced by 0.2% absolutely. Su-RNNLMs with more than 2 succeeding words gave about 0.5% absolute WER reduction. Bi-RNNLMs (shown in the bottom line of Table TABREF28 ) outperform su-RNNLMs by 0.1% to 0.2%, as it is able to incorporate the complete future context information with recurrent connection.
Table TABREF29 shows the WERs of lattice rescoring using su-RNNLMs. The lattice rescoring algorithm described in Section SECREF4 was applied. Su-RNNLMs with 1 and 3 succeeding words were used for lattice rescoring. From Table TABREF29 , su-RNNLMs with 1 succeeding word give a 0.2% WER reduction and using 3 succeeding words gives about a 0.5% WER reduction. These results are consistent with the 100-best rescoring results in Table TABREF28 . Confusion network decoding can be applied to the rescored lattices, and additional 0.3-0.4% WER improvements are obtained on the dev and eval test sets.
Conclusions
In this paper, the use of future context information in neural network language models has been explored. A novel model structure is proposed to address the issues associated with bi-RNNLMs, such as slow training speed and difficulties in lattice rescoring. Instead of using a recurrent unit to capture the complete future information, a feedforward unit was used to model a finite number of succeeding words. The existing training and lattice rescoring algorithms for uni-RNNLMs are extended for the proposed su-RNNLMs. Experimental results show that su-RNNLMs achieved slightly worse performance than bi-RNNLMs, but with much faster training speed. Furthermore, additional performance improvements can be obtained from lattice rescoring and subsequent confusion network decoding. Future work will examine improved pruning schemes to address the lattice expansion issues associated with larger future context. | AMI IHM meeting corpus |
3103502cf07726d3eeda34f31c0bdf1fc0ae964e | 3103502cf07726d3eeda34f31c0bdf1fc0ae964e_0 | Q: How do Zipf and Herdan-Heap's laws differ?
Text: Introduction
Statistical characterization of languages has been a field of study for decades BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. Even simple quantities, like letter frequency, can be used to decode simple substitution cryptograms BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. However, probably the most surprising result in the field is Zipf's law, which states that if one ranks words by their frequency in a large text, the resulting rank frequency distribution is approximately a power law, for all languages BIBREF0, BIBREF11. These kinds of universal results have long piqued the interest of physicists and mathematicians, as well as linguists BIBREF12, BIBREF13, BIBREF14. Indeed, a large amount of effort has been devoted to trying to understand the origin of Zipf's law, in some cases arguing that it arises from the fact that texts carry information BIBREF15, all the way to arguing that it is the result of mere chance BIBREF16, BIBREF17. Another interesting characterization of texts is the Heaps-Herdan law, which describes how the vocabulary (that is, the set of different words) grows with the size of a text; empirically, its size has been found to grow as a power of the text size BIBREF18, BIBREF19. It is worth noting that it has been argued that this law is a consequence of Zipf's law BIBREF20, BIBREF21.
A different tool used to characterize texts is the adjacency (or co-occurrence) network BIBREF22, BIBREF23, BIBREF24, BIBREF25. The nodes in this network represent the words in the text, and a link is placed between nodes if the corresponding words are adjacent in the text. These links can be directed (according to the order in which the words appear) or undirected. In this work we study properties of the adjacency network of various texts in several languages, using undirected links. The advantage of representing the text as a network is that we can describe properties of the text using the tools of network theory BIBREF26. The simplest characterization of a network is its degree distribution, that is, the fraction of nodes with a given number of links, and we will see that this distribution is also a universal power law for all languages. As we argue ahead, this may follow from the fact that Zipf's law is satisfied.
Another interesting use for text statistics is to distinguish texts and languages. In particular, as occurs with letter frequencies, other more subtle statistics may be used to distinguish different languages, and beyond that, provide a metric to group languages into different families BIBREF27, BIBREF28, BIBREF29. In this paper we use the clustering coefficient BIBREF26 to show that even though the degree distribution of the adjacency matrices is common to all languages, the statistics of their clustering coefficients, while approximately similar for various texts in each language, appears to be different from one language to another.
We use different texts (see Appendix (SECREF8)) instead of a large single corpus for each language because clustering coefficients typically decrease as a function of the size of the network BIBREF30. Accordingly, we must compare the statistics of the clustering coefficient in texts with adjacency networks of comparable sizes. In the following section we present the rank vs frequency distribution for these texts. We also measure how the vocabulary increases with text size, as well as the respective degree distributions of the networks corresponding to every text, and compare them with a null "random" hypothesis. This null hypothesis consists of a set of texts constructed as follows: we select a text and remove all the spaces between words, then we reintroduce the spaces at random with the restriction that there cannot be a space next to another. We identify as words all strings of letters between consecutive spaces (the restriction avoids the possibility of having empty words). The reason we build the null hypothesis this way instead of the usual independent random letters with random spaces most commonly used BIBREF17, BIBREF31, is that consecutive letters are not independent: they are correlated to ensure word pronounceability, as well as due to spelling rules. Our method for constructing these random texts conserves most of the correlations between consecutive letters in a given language.
Next, we calculate the distribution of the clustering coefficients of the nodes of the adjacency network for each text. These distribution functions are more or less similar for all the texts of the same language, provided the networks are of the same size. However, it is apparent that the distributions are different between different languages. We also compare the clustering coefficient distributions with those of the null hypothesis. The data show that the strongest differences between languages occur for the fractions of nodes with clustering coefficients 0 and 1. We build a scatter plot for these fractions for all the texts in each language. Though there is overlap between some languages, other languages are clearly differentiated in the plot. We fit correlated bivariate gaussian distributions to the data of each language, which allows us to estimate a likelihood that a text is in a given language.
Texts and Universal laws
We analyzed 91 texts written in 7 languages: Spanish, English, German, French, Turkish, Russian and Icelandic. We also considered as null texts 12 realizations of a randomized version of the Portrait of Dorian Gray book, two for each language analyzed here (except Icelandic). As mentioned above, the process for randomizing the text is as follows: first we remove the spaces in the original text. Then, we take the first letter, and with a probability of $1/2$ we add the next letter in the sequence, or the next letter in the sequence and a space. We advance to the last symbol added, and repeat the process until we reach the end of the text. This way we destroy the grammar of the original language, keeping the letter frequencies as well as most of the correlations between consecutive letters. The set of documents we used in this work is shown in Appendix SECREF8.
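A minimal sketch of this randomization procedure is shown below. The only assumption beyond the description above is a fixed random seed for reproducibility; the space-insertion rule guarantees that no two spaces are adjacent, so no empty words are produced.

```python
import random

def randomize_spacing(text, seed=0):
    """Remove all spaces, then re-insert them left to right: after each
    letter a space follows with probability 1/2, so two spaces can never
    be adjacent and letter order (hence letter correlations) is preserved."""
    rng = random.Random(seed)
    letters = [c for c in text if c != " "]
    out = []
    for i, c in enumerate(letters):
        out.append(c)
        if i < len(letters) - 1 and rng.random() < 0.5:
            out.append(" ")
    return "".join(out)
```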
All texts were intervened to remove punctuation marks, numbers, parenthesis and other uncommon symbols, and all the letters were turned into lower case, so a word appearing with different case letters would not be counted as two different words. Also, we do not transliterate the texts, instead, we use the original symbols of the texts (Cyrillic alphabet for Russian texts or the special characters in Icelandic) using the UTF-8 encoding.
Also, since clustering coefficients depend non trivially on the size of the networks, we cut the texts so they all have essentially the same vocabulary size ($\simeq 11260$).
In table TABREF1 we summarize for each language, the averages of the length, vocabulary size, maximum frequency and number of hapax legomena (i.e. words that appear only once in a document or corpus) of the texts studied here. It is important to note that for different languages, very different text lengths are required to achieve the same vocabulary size. We also note that in all cases, hapax legomena represent approximately half of the vocabulary in each text.
In figure (FIGREF2) we show Zipf plots for some of the texts, including the random texts constructed as described previously. It is clear that all the texts convincingly reproduce Zipf's law: $f(n)\sim 1/n^\alpha $ where $n=1,2,...N_{tot}$ is the word rank, $N_{tot}$ is the size of the vocabulary and $f(n)$ is its frequency. This is in contrast to previous work in which it is argued that there are differences between the Zipf plots of texts and random sequences BIBREF32; this might be due to the fact that our random text construction preserves correlations between letters, whereas the letters in BIBREF32 were placed independently. Our findings are summarized in Appendix (SECREF7).
Figure (FIGREF2) is the typical rank vs frequency plot for a randomly chosen text in each language. From the figure, we see that $\alpha \simeq 1$, obtained by least squares fits to the plot, describes very well all the texts. Therefore, given that $n/N_{tot}$ is the fraction of words with frequencies greater or equal to $f(n)$, then
where $p(f) \simeq 1/f^{\alpha _z}$ is the frequency distribution of the vocabulary. Now, if $f(n)\sim 1/n^\alpha $, then $p(f)\sim 1/f^{1+1/\alpha }$, i.e. $\alpha _z=1+1/\alpha $. Substituting $\alpha =1$, we have $\alpha _Z = 2$, which is in close agreement with what we observe. See figure (FIGREF5) and the tables in Appendix (SECREF7)
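To make the rank-frequency fit concrete, the sketch below computes the empirical rank-frequency curve from a tokenized text and estimates the exponent $\alpha$ by a least-squares fit in log-log space, as described above.

```python
import numpy as np
from collections import Counter

def zipf_exponent(words):
    """Estimate alpha in f(n) ~ 1/n^alpha from a list of tokens via a
    least-squares fit of log f(n) against log n."""
    freqs = np.array(sorted(Counter(words).values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1, dtype=float)
    slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return -slope
```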
Figure (FIGREF6) shows the size of the vocabulary $V(L)$ as a function of the length $L$ of the text considered. Once again, all the texts, including the random texts, follow the Heaps-Herdan law $V(L)\sim L^{\beta }$ reasonably well. Again, the parameters describing the various texts are given in Appendix (SECREF7).
Continuing with the universal laws describing texts, in figure (FIGREF7) we show an example of the degree distribution for the adjacency network of the texts studied in this work. It is clear that except for the low odd degrees ($k=1,3,5,7$, see inset in fig.(FIGREF7)), the distribution is well described by a power law. The parameters corresponding to the texts are given in Appendix (SECREF7). As mentioned previously, this asymptotic behavior is a consequence of Zipf's law. If we assume that each time a word appears, the input degree $k_{in}$ (alternatively, the output degree $k_{out}$) of the corresponding node increases approximately by one, then the input degree could be expected to grow proportionally to the frequency of each word. Further, in general we can expect the total degree of a node to be $k\approx k_{in}+k_{out}\approx 2k_{in}$ (clearly this is not always true: for example, a word can appear twice, being preceded both times by the same word and followed by different words each time, leading to a degree $k=3$). Then, up to multiplicative factors, we can apply the same argument as in Equation DISPLAY_FORM4 for $\mathrm {p}(k)$, the degree distribution of the network, instead of $p(f)$. From this equation it again follows that if $f(n)\sim 1/n^\alpha $, then $\mathrm {p}(k)\sim 1/k^{1+1/\alpha }$, which is again in close agreement with what we observe.
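The following sketch builds the undirected adjacency network from a token sequence and computes its degree distribution, as used for figure (FIGREF7). Self-loops (a word followed by itself) are removed here so that the clustering coefficients discussed below are well defined; this cleaning step is our assumption.

```python
import networkx as nx
from collections import Counter

def adjacency_network(words):
    """Undirected co-occurrence network: one node per distinct word and an
    edge between every pair of words that appear next to each other."""
    g = nx.Graph()
    g.add_edges_from(zip(words[:-1], words[1:]))
    g.remove_edges_from(list(nx.selfloop_edges(g)))   # assumed cleaning step
    return g

def degree_distribution(g):
    counts = Counter(dict(g.degree()).values())
    n = g.number_of_nodes()
    return {k: c / n for k, c in sorted(counts.items())}
```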
Clustering coefficient
Thus far, our results confirm that all our texts exhibit the expected universal statistics observed in natural languages. Actually, it could be argued that these laws may be "too universal", not being able to clearly distinguish texts written in real languages from our random texts. Further, all these laws appear to be consequences of Zipf's law, and this law reflects only the frequency of words, not their order. Thus, all three laws would still hold if the words of the texts were randomly shuffled. Clearly, shuffling the words destroys whatever relations may exist between successive words in a text, depending on the language in which it was written. This relation between successive words is what conveys meaning to a text. Thus, we expect that the clustering coefficient BIBREF26 of the adjacency network of each text (constructed using words as nodes and linking those that are adjacent in the text), which depends strongly on the local structure, will distinguish between random texts and real texts, and even between texts in different languages.
The clustering coefficient $C_i(k_i)$ of node $i$ with degree $k_i$ is defined as the ratio of the number of links between node $i$'s neighbors over the total number of links that would be possible for this node $k_i(k_i-1)/2$. Thus, clearly, $0\le C_i(k_i)\le 1$. Hapax legomena, for example, mostly correspond to nodes with degree $k=2$, thus their clustering coefficient can only take the values 0 and 1 (degree $k=1$ is possible if the hapax appears followed and preceded by the same word, but these are rare occurrences). In general terms, the actual values of the clustering coefficients vary as a function of the size of the network BIBREF30, thus, in order to compare the clustering coefficients of networks corresponding to different texts, we have trimmed our texts so they all have approximately the same vocabulary size ($\simeq 11260$). In figure (FIGREF8) we show an example of the clustering coefficient as a function of $k$. There are many values $C(k)$ for each $k$ corresponding to the diverse nodes with the same degree. The red points in the graph denote the average clustering coefficient for each $k$, and the solid black line is the log-binning of this average.
Language differentiation
In order to quantify differences between languages, for each text we define the quantity $\nu (C)$ as
In figure (FIGREF10) we show $\nu (C)$ vs $C$ for Don Quixote in six different languages. From the graph it is clear that $\nu (0)$ and $\nu (1)$ show the largest degree of variation between the various languages, thus, we propose to focus on these two numbers to characterize the various languages.
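The defining equation for $\nu(C)$ is not reproduced here, so the sketch below assumes that $\nu(C)$ is the fraction of nodes whose clustering coefficient equals $C$, which is consistent with the discussion of $\nu(0)$ and $\nu(1)$ above and below.

```python
import networkx as nx

def nu(g, c, tol=1e-9):
    """Assumed definition: fraction of nodes whose clustering coefficient
    equals c (e.g. c = 0 or c = 1)."""
    cc = nx.clustering(g)          # dict: node -> C_i
    return sum(abs(v - c) < tol for v in cc.values()) / g.number_of_nodes()

# nu(g, 0.0) and nu(g, 1.0) give the two coordinates used in the scatter plot below.
```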
In figure (FIGREF11) we show a scatter plot of $\nu (1)$ vs $\nu (0)$ for the texts in every language presented here. Using maximum likelihood estimators, we fit correlated bi-variate Gaussian distributions to the scatter plots of each language, the contour plots of which are also shown in the graph. First and most importantly, we can see in the figure that there is a clear distinction between languages and random texts. Also, we can see that languages tend to cluster in a way that is consistent with the known relationships among the languages. For example, in the figure we note that the contours corresponding to French and Spanish show a strong overlap, which might have been expected as they are closely related languages BIBREF35. On the other hand, Russian is far from French and Spanish. This suggests that these curves may be used as a quantitative aid for the classification of languages into families. For example, French and Spanish, which are both Romance languages, appear closer to each other than to Russian and Turkish, which have different origins.
In order to test the validity of our results, we calculate $\nu (0)$ and $\nu (1)$ for another set of books (see tables in the appendix (SECREF8)) and, using the fitted Gaussian distributions for each language, we calculate the probability that a text in each language would have those values, which allows us to assign a likelihood that a text is written in one or another language.
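A minimal sketch of this fitting and classification step is shown below. The data structures are placeholders, and the sample covariance returned by numpy uses the unbiased estimator rather than the strict maximum-likelihood one; for this sketch the difference is immaterial.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_language_models(points_by_language):
    """Fit one correlated bivariate Gaussian per language to the
    (nu(0), nu(1)) points of its training texts."""
    models = {}
    for lang, pts in points_by_language.items():
        pts = np.asarray(pts, dtype=float)
        models[lang] = multivariate_normal(mean=pts.mean(axis=0),
                                           cov=np.cov(pts, rowvar=False))
    return models

def most_likely_language(models, point):
    """Assign the language whose fitted Gaussian gives the highest density."""
    return max(models, key=lambda lang: models[lang].pdf(point))
```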
In table TABREF12 we can see, for example, that it is more likely that Smásögur I (Short stories in Icelandic) are written in Icelandic than in any of the other languages analyzed, or that they are a random text.
Not surprisingly, it is not so easy to tell if Voltaire in French is really written in French or in Spanish; likewise, it is not easy to tell if Moby Dick in Spanish is written in Spanish or French, and in both cases the maximum likelihood prediction fails. Nevertheless, it is clear that these books are not written in any of the other languages presented here, nor do they correspond to a random text. On the other hand, Twenty thousand leagues under the sea in Spanish and Les Miserables in French are correctly identified, as well as all the other texts analyzed, including the random texts.
To try to pinpoint the origin of the differentiation between different languages, we note that an inspection of the nodes with $C=0$ and 1 reveals that they mainly consist of hapax legomena (as noted before, hapax legomena only have $C$ values of 0 and 1). To measure the relative importance of these words, we calculate the ratio of hapax legomena to the total number of words with $C=0$ and 1, we call this number $\nu ^{\prime }_{H}(C)$.
In Table TABREF13, we show the fraction of hapax legomena of the words with $C=0,1$ for several texts in English. A value close to 1 indicates that most of the nodes that contribute to $\nu ^{\prime }_H(C)$ are words that appear only once in the document. This indicates that the local structure around those words, i.e, the way that they relate in the adjacency network, is particular to each language, and seems to be a key for language differentiation.
In the Table TABREF14 we see the average of $\nu ^{\prime }_H(C)$ for each of the languages studied here. Note that for example the values are clearly different for Spanish and Turkish, similar for Spanish and French, and very different for all languages and random.
Conclusions
Zipf's law is one of the most universal statistics of natural languages. However, it may be too universal. While it may not strictly apply to sequences of independent random symbols with random spacings BIBREF32, it appears to describe random texts that conserve most of the correlations between successive symbols, as accurately as it describes texts written in real languages. Further, Heaps-Herdan law and the degree distribution of the adjacency network, appear to be consequences of Zipf's law, and are, thus, as universal.
In this work we studied 91 texts in seven different languages, as well as random texts constructed by randomizing the spacings between words without altering the order of the letters in the text. We find that they are all well described by the universal laws. However, we also found that the distribution of clustering coefficients of the networks of each text appears to vary from one language to another, and to distinguish random texts from real languages. The nodes that vary the most among the distributions of $C(k)$ are those for which $C(k)$ is equal to 0 or 1. We fit the scatter plot of these nodes to bivariate Gaussian distributions, which allows us to define the likelihood that a text is written in each given language. This method was very successful in identifying the languages in which the texts were written, only failing to distinguish a couple of texts, confusing texts in French and Spanish, which have a strong overlap. In Table (TABREF12) we present evidence that we can use the statistics of the clustering coefficient to measure a sort of distance between languages.
Though hapax legomena account for most of the value $\nu (C)$ for $C=0$ and 1, we found that the fraction $\nu ^{\prime }_H(C)$ of hapax to other words is similar for French and Spanish, and different for Spanish and, say, Turkish. Further, $\nu ^{\prime }_H(C)$ is different between random texts and the languages we study. These observations might give some clue to the mechanism by which the clustering coefficient, and in particular the local structure around hapax legomena, helps to differentiate languages.
Unlike the work presented by Gamallo et al. BIBREF27, which is corpus-based, our work uses a relatively small number of texts. Also, as we can see in the tables presented in Appendix (SECREF7), the length of the texts we use is not necessarily the length of the complete work. Texts were cut at the appropriate length for all of them to have approximately the same vocabulary ($\simeq 11260$). Thus, actual lengths ranged from 368076 words for the Jane Austen books in English, to 26347 words for the text we called Turkish I. This is important not only for computational reasons; it may also be important for studies of the relation between languages for which large corpora do not exist, something very common in linguistic studies of indigenous languages. The method proposed in this work can be useful in such cases, as small texts trimmed to some appropriate vocabulary size are the only necessary ingredient.
Acknowledgments
Diego Espitia acknowledges financial support through a doctoral scholarship from Consejo Nacional de Ciencia y Tecnología (CONACyT).
Tables and Results
In this appendix we present tables of results for the data analyzed in this work. Here $\alpha _k$ and $\sigma _k$ represent the exponent and standard error of the power law for the degree distribution of the co-occurrence networks $p(k) \propto 1/k^{\alpha _k} $, for $k> k_{min}$, where $k_{min}$ is the smallest degree for which the power law holds. Similarly, $\alpha _Z$ and $\sigma _z$ represent the exponent and standard error of the distribution of frequencies $p(f)\propto 1/f^{\alpha _z}$ for $f > f_{min}$, where now $f_{min}$ is the smallest frequency for which the power law is satisfied. The values of the Heaps' law parameters $\beta $ and $\sigma _h$ were obtained via least squares fitting.
For the estimation of the parameters we use the Maximum Likelihood Estimation (MLE) method for discerning and quantifying power-law behavior in empirical data BIBREF36. The MLE works as follows: assuming that the data fits a power law, we estimate $\alpha $ via
where $x_i > x_{min}^*$ for $i=1,...N$, using as $x^*_{min}$ each element of the data set $\lbrace x\rbrace $. Then, using the Kolmogorov–Smirnov test, we find the distance $D$ between the cumulative distribution of the data set and the cumulative distribution $P_{(x^*_{min},\alpha ^*)}(x)$. From this set of distances, we find the value which minimizes $D$; this $x_{min}$ is the smallest value for which the power law holds, and can be used to determine the parameter of the power law $\hat{\alpha }$. In order to perform a goodness-of-fit test, we construct 1000 synthetic data sets using the previous $\hat{\alpha }$ and $x_{min}$. We then count the fraction of the synthetic distances that are larger than the distance obtained from the data. This fraction is known as the p-value. If this p-value$>0.1$, then the difference between the data set and the model can be attributed to statistical fluctuations alone; if it is small, the model is not a plausible fit to the data BIBREF36.
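Since the display equation for the estimator is not reproduced above, the sketch below follows the standard continuous-variable estimator of BIBREF36, $\hat{\alpha} = 1 + n\,[\sum_i \ln(x_i/x_{min})]^{-1}$, together with the KS scan over candidate $x_{min}$ values. Word frequencies and degrees are discrete, so treating them with the continuous estimator is a simplification, and the synthetic-data p-value step is omitted for brevity.

```python
import numpy as np

def fit_power_law(data):
    """Scan candidate x_min values, fit alpha on the tail by the continuous
    MLE, and keep the x_min minimising the Kolmogorov-Smirnov distance
    between the empirical and fitted tail CDFs."""
    data = np.sort(np.asarray(data, dtype=float))
    best = (None, None, np.inf)            # (alpha, x_min, KS distance)
    for x_min in np.unique(data)[:-1]:
        tail = data[data >= x_min]
        alpha = 1.0 + len(tail) / np.sum(np.log(tail / x_min))
        emp = np.arange(1, len(tail) + 1) / len(tail)
        model = 1.0 - (tail / x_min) ** (1.0 - alpha)
        d = np.max(np.abs(emp - model))
        if d < best[2]:
            best = (alpha, x_min, d)
    return best
```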
Spanish
English
French
German
Turkish
Russian
Icelandic
Random
Texts used
Here we present the texts used in this work. The vast majority of the texts were obtained from the Gutenberg Project, except for the texts in Russian, Turkish and Icelandic, which were obtained from other sources.
Icelandic
Torfhildi Hólm Brynjólfur Biskup Sveinsson
Sagas I
Sagas II
Sagas III
Sagas IV
Sagas V
Sagas VI
Sagas VII
Jón Trausti
Jón Thoroddsen Maður Og Kona
Þorgils Gjallanda
Smásögur I
Smásögur II
Source: All sagas were obtained from https://sagadb.org/. The other texts were obtained from https://www.snerpa.is/net/index.html | Zipf's law describes change of word frequency rate, while Heaps-Herdan describes different word number in large texts (assumed that Hepas-Herdan is consequence of Zipf's) |
aaec98481defc4c230f84a64cdcf793d89081a76 | aaec98481defc4c230f84a64cdcf793d89081a76_0 | Q: What was the best performing baseline?
Text: Introduction
The goal of the text summarization task is to produce a summary from a set of documents. The summary should retain important information and be reasonably shorter than the original documents BIBREF0 . When the set of documents contains only a single document, the task is usually referred to as single-document summarization. There are two kinds of summarization characterized by how the summary is produced: extractive and abstractive. Extractive summarization attempts to extract a few important sentences verbatim from the original document. In contrast, abstractive summarization tries to produce an abstract which may contain sentences that do not exist in or are paraphrased from the original document.
Despite quite a few studies on Indonesian text summarization, none of them were trained or evaluated on a large, publicly available dataset. Also, although ROUGE BIBREF1 is the standard intrinsic evaluation metric for English text summarization, for Indonesian this does not seem to be the case. Previous works rarely state explicitly that their evaluation was performed with ROUGE. The lack of a benchmark dataset and the different evaluation metrics make comparison among Indonesian text summarization research difficult.
In this work, we introduce IndoSum, a new benchmark dataset for Indonesian text summarization, and evaluated several well-known extractive single-document summarization methods on the dataset. The dataset consists of online news articles and has almost 200 times more documents than the next largest one of the same domain BIBREF2 . To encourage further research in this area, we make our dataset publicly available. In short, the contribution of this work is two-fold:
The state-of-the-art result on the dataset, although impressive, is still significantly lower than the maximum possible ROUGE score. This result suggests that the dataset is sufficiently challenging to be used as evaluation benchmark for future research on Indonesian text summarization.
Related work
Fachrurrozi et al. BIBREF3 proposed some scoring methods and used them with TF-IDF to rank and summarize news articles. Another work BIBREF4 used latent Dirichlet allocation coupled with a genetic algorithm to produce summaries for online news articles. Simple methods like naive Bayes have also been used for Indonesian news summarization BIBREF2 , although for English, naive Bayes had been used almost two decades earlier BIBREF5 . A more recent work BIBREF6 employed a summarization algorithm called TextTeaser with some predefined features for news articles as well. Slamet et al. BIBREF7 used TF-IDF to convert sentences into vectors, and their similarities were then computed against another vector obtained from some keywords. They used these similarity scores to extract important sentences as the summary. Unfortunately, none of these works seem to have been evaluated using ROUGE, despite it being the standard metric for text summarization research.
An example of Indonesian text summarization research which used ROUGE is BIBREF8 . They employed the best method on TAC 2011 competition for news dataset and achieved ROUGE-2 scores that are close to that of humans. However, their dataset consists of only 56 articles which is very small, and the dataset is not available publicly.
An attempt to make a public summarization dataset has been done in BIBREF9 . They compiled a chat dataset along with its summary, which has both the extractive and abstractive versions. This work is a good step toward standardizing summarization research for Indonesian. However, to the best of our knowledge, for news dataset, there has not been a publicly available dataset, let alone a standard.
IndoSum: a new benchmark dataset
We used a dataset provided by Shortir, an Indonesian news aggregator and summarizer company. The dataset contains roughly 20K news articles. Each article has a title, category, source (e.g., CNN Indonesia, Kumparan), URL to the original article, and an abstractive summary which was created manually by a total of two native speakers of Indonesian. There are 6 categories in total: Entertainment, Inspiration, Sport, Showbiz, Headline, and Tech. A sample article-summary pair is shown in Fig. FIGREF4 .
Note that 20K articles is actually quite small compared to the English CNN/DailyMail dataset used in BIBREF11 , which has 200K articles. Therefore, we used 5-fold cross-validation to split the dataset into 5 folds of training, development, and testing sets. We preprocessed the dataset by tokenizing, lowercasing, removing punctuation, and replacing digits with zeros. We used NLTK BIBREF12 and spaCy for sentence and word tokenization respectively.
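The preprocessing pipeline described above is straightforward to reproduce. The following is a minimal sketch, assuming the NLTK punkt models are available and using a blank Indonesian spaCy pipeline for word tokenization; the exact tokenizer configuration used for IndoSum is not specified, so these choices are assumptions.

```python
import re
import string

import nltk
import spacy

nltk.download("punkt", quiet=True)   # sentence tokenizer models
nlp = spacy.blank("id")              # blank Indonesian pipeline (tokenizer only)

def preprocess(text):
    """Tokenize into sentences and words, lowercase, drop punctuation, map digits to 0."""
    processed = []
    for sentence in nltk.sent_tokenize(text):
        tokens = [tok.text.lower() for tok in nlp(sentence)]
        tokens = [t for t in tokens if t not in string.punctuation]
        tokens = [re.sub(r"\d", "0", t) for t in tokens]
        if tokens:
            processed.append(tokens)
    return processed
```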
In our exploratory analysis, we discovered that some articles have a very long text and some summaries have too many sentences. Articles with a long text are mostly articles containing a list, e.g., a list of songs played in a concert, a list of award nominations, and so on. Since such a list is never included in the summary, we truncated these articles so that the number of paragraphs is at most two standard deviations away from the mean. For each fold, the mean and standard deviation were estimated from the training set. We also discarded articles whose summary is too long, since we do not want lengthy summaries anyway. The cutoff length is defined by the upper limit of Tukey's boxplot, where for each fold, the quartiles were estimated from the training set. After removing such articles, we ended up with roughly 19K articles in total. The complete statistics of the corpus are shown in Table TABREF5 .
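Both cutoffs can be computed directly from the training portion of each fold. The sketch below shows one way to do it; the conventional 1.5 x IQR fence for the Tukey upper limit is an assumption, since the exact constant is not stated.

```python
import numpy as np

def paragraph_cutoff(train_paragraph_counts):
    # Truncation point: at most two standard deviations above the training-set mean.
    counts = np.asarray(train_paragraph_counts, dtype=float)
    return int(counts.mean() + 2 * counts.std())

def summary_length_cutoff(train_summary_lengths):
    # Upper limit of Tukey's boxplot: Q3 + 1.5 * IQR, with quartiles from the training set.
    q1, q3 = np.percentile(train_summary_lengths, [25, 75])
    return q3 + 1.5 * (q3 - q1)
```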
Since the gold summaries provided by Shortir are abstractive, we needed to label the sentences in each article for training the supervised extractive summarizers. We followed Nallapati et al. BIBREF10 in producing these labeled sentences (called oracles hereinafter) using their greedy algorithm. The idea is to maximize the ROUGE score between the labeled sentences and the abstractive gold summary. Although the provided gold summaries are abstractive, in this work we focused on extractive summarization because we think research in this area is more mature, especially for Indonesian, and thus starting with extractive summarization is a logical first step toward standardizing Indonesian text summarization research.
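The greedy oracle construction can be sketched as follows. The rouge_f1 argument stands for any sentence-level ROUGE scorer (e.g., ROUGE-1 F1 between the selected sentences and the gold summary); it is an assumed helper, not part of the original implementation, and the optional cap on selected sentences is also an assumption.

```python
def greedy_oracle_labels(sentences, gold_summary, rouge_f1, max_selected=None):
    """Greedily pick sentences that maximize ROUGE against the abstractive gold summary.

    sentences: list of sentence strings from the article.
    gold_summary: the abstractive gold summary string.
    rouge_f1: callable(list_of_sentences, gold_summary) -> float.
    Returns a 0/1 label per sentence (1 = part of the extractive oracle).
    """
    selected, best_score = [], 0.0
    limit = max_selected if max_selected is not None else len(sentences)
    while len(selected) < limit:
        candidates = [
            (rouge_f1([sentences[j] for j in selected] + [sentences[i]], gold_summary), i)
            for i in range(len(sentences)) if i not in selected
        ]
        if not candidates:
            break
        score, idx = max(candidates)
        if score <= best_score:   # stop once no remaining sentence improves the score
            break
        best_score = score
        selected.append(idx)
    return [1 if i in selected else 0 for i in range(len(sentences))]
```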
Since there can be many valid summaries for a given article, having only a single abstractive summary per article is a limitation of our dataset which we acknowledge. Nevertheless, we feel that the existence of such a dataset is a crucial step toward a fair benchmark for Indonesian text summarization research. Therefore, we make the dataset publicly available for others to use.
Evaluation
For evaluation, we used ROUGE BIBREF1 , a standard metric for text summarization. We used the implementation provided by pythonrouge. Following BIBREF11 , we report the F1 score of R-1, R-2, and R-L. Intuitively, R-1 and R-2 measure informativeness and R-L measures fluency BIBREF11 . We report the F1 score instead of just the recall score because, although we extract a fixed number of sentences as the summary, the number of words is not limited. So, reporting only recall would benefit models which extract long sentences.
Compared methods
We compared several summarization methods which can be categorized into three groups: unsupervised, non-neural supervised, and neural supervised methods. For the unsupervised methods, we tested:
SumBasic, which uses word frequency to rank sentences and selects top sentences as the summary BIBREF13 , BIBREF14 . A minimal sketch of this frequency-based ranking is given after this list of methods.
Lsa, which uses latent semantic analysis (LSA) to decompose the term-by-sentence matrix of a document and extracts sentences based on the result. We experimented with the two approaches proposed in BIBREF15 and BIBREF16 respectively.
LexRank, which constructs a graph representation of a document, where nodes are sentences and edges represent similarity between two sentences, runs the PageRank algorithm on that graph, and extracts sentences based on the resulting PageRank values BIBREF17 . In the original implementation, sentences shorter than a certain threshold are removed. Our implementation does not do this removal, to reduce the number of tunable hyperparameters. Also, the original uses cross-sentence informational subsumption (CSIS) during the sentence selection stage, but the paper does not explain it well. Instead, we used an approximation to CSIS called cross-sentence word overlap, described in BIBREF18 by the same authors.
TextRank, which is very similar to LexRank but computes sentence similarity based on the number of common tokens BIBREF19 .
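As referenced in the SumBasic description above, the following is a minimal sketch of SumBasic-style ranking on a preprocessed article (a list of token lists). The non-redundancy update that squares the probabilities of already-covered words follows the original formulation; extracting the top 3 sentences mirrors our experimental setup.

```python
from collections import Counter

def sumbasic(sentences, n_summary=3):
    """sentences: list of token lists (lowercased, punctuation removed)."""
    words = [w for sent in sentences for w in sent]
    prob = {w: count / len(words) for w, count in Counter(words).items()}
    selected = []
    while len(selected) < min(n_summary, len(sentences)):
        # Score each remaining sentence by the average probability of its words.
        scores = {
            i: sum(prob[w] for w in sent) / len(sent)
            for i, sent in enumerate(sentences)
            if i not in selected and sent
        }
        if not scores:
            break
        best = max(scores, key=scores.get)
        selected.append(best)
        for w in sentences[best]:
            prob[w] **= 2        # down-weight words that are already covered
    return sorted(selected)       # sentence indices in document order
```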
For the non-neural supervised methods, we compared:
Bayes, which represents each sentence as a feature vector and uses naive Bayes to classify them BIBREF5 . The original paper computes the TF-IDF score on multi-word tokens that are identified automatically using mutual information. We did not perform this identification, so our TF-IDF computation operates on word tokens.
Hmm, which uses a hidden Markov model whose states correspond to whether the sentence should be extracted BIBREF20 . The original work uses QR decomposition for sentence selection, but our implementation does not. We simply ranked the sentences by their scores and picked the top 3 as the summary.
MaxEnt, which represents each sentence as a feature vector and leverages a maximum entropy model to compute the probability that a sentence should be extracted BIBREF21 . The original approach puts a prior distribution over the labels, but we put the prior on the weights instead. Our implementation still agrees with the original because we employed a bias feature which should be able to learn the prior label distribution. A sketch of this feature-vector approach is given after this list.
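As mentioned above, the supervised non-neural methods share a common pattern: turn each sentence into a feature vector and train a classifier on the oracle labels. The sketch below illustrates this pattern with a small, assumed feature set (a TF-IDF weight, sentence length, relative position, plus a bias feature) and scikit-learn's logistic regression, whose L2 penalty corresponds to a Gaussian prior over the weights; the actual feature sets follow BIBREF5 and BIBREF21 and are not reproduced here.

```python
import math
from collections import Counter

import numpy as np
from sklearn.linear_model import LogisticRegression

def build_idf(documents):
    # documents: list of token lists; smoothed IDF (the exact variant is an assumption).
    df = Counter()
    for doc in documents:
        df.update(set(doc))
    return {w: math.log(len(documents) / (1 + c)) + 1 for w, c in df.items()}

def sentence_features(sent_tokens, position, n_sentences, idf):
    tfidf = sum(count * idf.get(w, 0.0) for w, count in Counter(sent_tokens).items())
    return [tfidf, len(sent_tokens), position / max(n_sentences, 1), 1.0]  # last entry: bias feature

def train_maxent(articles, labels, idf):
    # articles: list of articles, each a list of token lists; labels: matching 0/1 oracle labels.
    X, y = [], []
    for sents, sent_labels in zip(articles, labels):
        for i, (sent, lab) in enumerate(zip(sents, sent_labels)):
            X.append(sentence_features(sent, i, len(sents), idf))
            y.append(lab)
    clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
    clf.fit(np.array(X), np.array(y))
    return clf
```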
As for the neural supervised method, we evaluated NeuralSum BIBREF11 using the original implementation by the authors. We modified their implementation slightly to allow for evaluating the model with ROUGE. Note that all the methods are extractive. Our implementation code for all the methods above is available online.
As a baseline, we used Lead-N, which selects the N leading sentences as the summary. For all methods, we extracted 3 sentences as the summary, since this is the median number of sentences in the gold summaries that we found in our exploratory analysis.
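The Lead-N baseline itself is a one-liner; with n = 3 it matches the 3-sentence summaries used throughout our experiments.

```python
def lead_n(sentences, n=3):
    # Lead-N baseline: take the first n sentences of the article as the summary.
    return sentences[:n]
```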
Experiment setup
Some of these approaches optionally require a precomputed term frequency (TF) or inverse document frequency (IDF) table and a stopword list. We precomputed the TF and IDF tables from an Indonesian Wikipedia dump and used the stopword list provided in BIBREF22 . Hyperparameters were tuned on the development set of each fold, optimizing for R-1 as it correlates best with human judgment BIBREF23 . For NeuralSum, we tried several scenarios:
tuning the dropout rate while keeping other hyperparameters fixed,
increasing the word embedding size from the default 50 to 300,
initializing the word embedding with a FastText pre-trained embedding BIBREF24 (a sketch of this initialization is given after this list).
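As noted in the last scenario, initializing NeuralSum's word embeddings from pre-trained FastText vectors amounts to filling an embedding matrix before training. The sketch below assumes the vectors are available in word2vec text format and loads them with gensim; the random range used for out-of-vocabulary words is an assumption.

```python
import numpy as np
from gensim.models import KeyedVectors

def build_embedding_matrix(vocab, fasttext_path, dim=300):
    """vocab: dict mapping word -> row index; fasttext_path: .vec file in word2vec text format."""
    vectors = KeyedVectors.load_word2vec_format(fasttext_path)
    matrix = np.random.uniform(-0.05, 0.05, size=(len(vocab), dim)).astype(np.float32)
    for word, idx in vocab.items():
        if word in vectors:
            matrix[idx] = vectors[word][:dim]   # copy the pre-trained vector
    return matrix
```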
Scenario 2 is necessary to determine whether any improvement in scenario 3 is due to the larger embedding size or to the pre-trained embedding. In scenarios 2 and 3, we used the default hyperparameter setting from the authors' implementation. In addition, for every scenario, we picked the model saved at the epoch that yields the best R-1 score on the development set.
Overall results
Table TABREF26 shows the test F1 scores of ROUGE-1, ROUGE-2, and ROUGE-L for all the tested models described previously. The mean and standard deviation (bracketed) of the scores are computed over the 5 folds. We report the score obtained by an oracle summarizer as Oracle. Its summaries are obtained by using the true labels. This oracle summarizer acts as the upper bound of an extractive summarizer on our dataset. As we can see, in general, every scenario of NeuralSum consistently outperforms the other models significantly. The best scenario is NeuralSum with a word embedding size of 300, although its ROUGE scores are still within one standard deviation of NeuralSum with the default word embedding size. The Lead-3 baseline performs really well and outperforms almost all the other models, which is not surprising and is consistent with other work finding that, for news summarization, the Lead-N baseline is surprisingly hard to beat. Slightly lower than Lead-3 are LexRank and Bayes, but their scores are still within one standard deviation of each other, so their performance is on par. This result suggests that a non-neural supervised summarizer is not better than an unsupervised one, and thus if labeled data are available, it might be best to opt for a neural summarizer right away. We also note that, despite its high ROUGE, every NeuralSum scenario still scores considerably lower than Oracle, hinting that it can be improved further. Moreover, initializing with the FastText pre-trained embedding slightly lowers the scores, although they are still within one standard deviation. This finding suggests that the effect of the FastText pre-trained embedding is unclear for our case.
Out-of-domain results
Since Indonesian is a low-resource language, collecting an in-domain dataset for any task (including summarization) can be difficult. Therefore, we experimented with an out-of-domain scenario to see if NeuralSum can be used easily for a new use case for which the dataset is scarce or non-existent. Concretely, we trained the best NeuralSum (with word embedding size of 300) on articles belonging to category A and evaluated its performance on articles belonging to category B, for all pairs of categories A and B. As we have a total of 6 categories, we have 36 domain pairs to experiment on. To reduce computational cost, we used only the articles from the first fold and did not tune any hyperparameters. We note that this decision might undermine the generalizability of conclusions drawn from these out-of-domain experiments. Nonetheless, we feel that the results can still be useful guidance for future work. As comparisons, we also evaluated Lead-3, Oracle, and the best unsupervised method, LexRank. For LexRank, we used the best hyperparameters that we found in the previous experiment for the first fold. We only report the ROUGE-1 scores. Table TABREF27 shows the result of this experiment.
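The out-of-domain grid can be produced with a simple double loop over categories. In the sketch below, train_fn and eval_rouge1_fn are assumed wrappers around the summarizer's training routine and the ROUGE-1 scorer; they are placeholders, not part of the original code base.

```python
import itertools

def cross_domain_grid(articles_by_category, train_fn, eval_rouge1_fn):
    """Train on each source category and evaluate on each target category (6 x 6 = 36 pairs)."""
    scores = {}
    for source, target in itertools.product(articles_by_category, repeat=2):
        model = train_fn(articles_by_category[source])
        scores[(source, target)] = eval_rouge1_fn(model, articles_by_category[target])
    return scores
```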
We see that almost all the results outperform the Lead-3 baseline, which means that in out-of-domain cases, NeuralSum does not summarize just by selecting some leading sentences from the original text. Almost all NeuralSum results also outperform LexRank, suggesting that when there is no in-domain training data, training NeuralSum on out-of-domain data may yield better performance than using an unsupervised model like LexRank. Looking at the best results, we observe that they are all out-of-domain cases. In other words, training on out-of-domain data is surprisingly better than on in-domain data. For example, for Sport as the target domain, the best model is trained on Headline as the source domain. In fact, using Headline as the source domain yields the best result in 3 out of 6 target domains. We suspect that this phenomenon is caused by the similarity between the corpora of the two domains. Specifically, training on Headline yields the best result most of the time because news from any domain can be a headline. Further investigation of this issue might leverage the domain similarity metrics proposed in BIBREF25 . Next, comparing the best NeuralSum performance on each target domain to Oracle, we still see quite a large gap. This gap hints that NeuralSum can still be improved further, probably by lifting the limitations of our experiment setup (e.g., tuning the hyperparameters for each domain pair).
Conclusion and future work
We presented IndoSum, a new benchmark dataset for Indonesian text summarization, and evaluated state-of-the-art extractive summarization methods on it. We tested unsupervised, non-neural supervised, and neural supervised summarization methods. We used ROUGE as the evaluation metric because it is the standard intrinsic evaluation metric for text summarization. Our results show that neural models outperform non-neural ones, and that in the absence of an in-domain corpus, training on an out-of-domain one seems to yield better performance than using an unsupervised summarizer. Also, we found that the best performing model achieves ROUGE scores that are still significantly lower than the maximum possible scores, which suggests that the dataset is sufficiently challenging for future work. The dataset, which consists of 19K article-summary pairs, is publicly available. We hope that the dataset and the evaluation results can serve as a benchmark for future research on Indonesian text summarization.
Future work in this area may focus on improving summarizer performance by employing newer neural models such as SummaRuNNer BIBREF10 or by incorporating side information BIBREF26 . Since the gold summaries are abstractive, abstractive summarization techniques such as attention-based neural models BIBREF27 , seq2seq models BIBREF28 , pointer networks BIBREF29 , or reinforcement learning-based approaches BIBREF30 are also interesting directions for future work. Other tasks, such as further investigation of the out-of-domain issue, human evaluation, or even extending the corpus to include more than one summary per article, are worth exploring as well.
The goal of text summarization task is to produce a summary from a set of documents. The summary should retain important information and be reasonably shorter than the original documents BIBREF0 . When the set of documents contains only a single document, the task is usually referred to as single-document summarization. There are two kinds of summarization characterized by how the summary is produced: extractive and abstractive. Extractive summarization attempts to extract few important sentences verbatim from the original document. In contrast, abstractive summarization tries to produce an abstract which may contain sentences that do not exist in or are paraphrased from the original document.
Despite quite a few number of research on Indonesian text summarization, none of them were trained nor evaluated on a large, publicly available dataset. Also, although ROUGE BIBREF1 is the standard intrinsic evaluation metric for English text summarization, for Indonesian it does not seem so. Previous works rarely state explicitly that their evaluation was performed with ROUGE. The lack of a benchmark dataset and the different evaluation metrics make comparing among Indonesian text summarization research difficult.
In this work, we introduce IndoSum, a new benchmark dataset for Indonesian text summarization, and evaluated several well-known extractive single-document summarization methods on the dataset. The dataset consists of online news articles and has almost 200 times more documents than the next largest one of the same domain BIBREF2 . To encourage further research in this area, we make our dataset publicly available. In short, the contribution of this work is two-fold:
The state-of-the-art result on the dataset, although impressive, is still significantly lower than the maximum possible ROUGE score. This result suggests that the dataset is sufficiently challenging to be used as evaluation benchmark for future research on Indonesian text summarization.
Related work
Fachrurrozi et al. BIBREF3 proposed some scoring methods and used them with TF-IDF to rank and summarize news articles. Another work BIBREF4 used latent Dirichlet allocation coupled with genetic algorithm to produce summaries for online news articles. Simple methods like naive Bayes has also been used for Indonesian news summarization BIBREF2 , although for English, naive Bayes has been used almost two decades earlier BIBREF5 . A more recent work BIBREF6 employed a summarization algorithm called TextTeaser with some predefined features for news articles as well. Slamet et al. BIBREF7 used TF-IDF to convert sentences into vectors, and their similarities are then computed against another vector obtained from some keywords. They used these similarity scores to extract important sentences as the summary. Unfortunately, all these work do not seem to be evaluated using ROUGE, despite being the standard metric for text summarization research.
An example of Indonesian text summarization research which used ROUGE is BIBREF8 . They employed the best method on TAC 2011 competition for news dataset and achieved ROUGE-2 scores that are close to that of humans. However, their dataset consists of only 56 articles which is very small, and the dataset is not available publicly.
An attempt to make a public summarization dataset has been done in BIBREF9 . They compiled a chat dataset along with its summary, which has both the extractive and abstractive versions. This work is a good step toward standardizing summarization research for Indonesian. However, to the best of our knowledge, for news dataset, there has not been a publicly available dataset, let alone a standard.
IndoSum: a new benchmark dataset
We used a dataset provided by Shortir, an Indonesian news aggregator and summarizer company. The dataset contains roughly 20K news articles. Each article has the title, category, source (e.g., CNN Indonesia, Kumparan), URL to the original article, and an abstractive summary which was created manually by a total of 2 native speakers of Indonesian. There are 6 categories in total: Entertainment, Inspiration, Sport, Showbiz, Headline, and Tech. A sample article-summary pair is shown in Fig. FIGREF4 .
Note that 20K articles are actually quite small if we compare to English CNN/DailyMail dataset used in BIBREF11 which has 200K articles. Therefore, we used 5-fold cross-validation to split the dataset into 5 folds of training, development, and testing set. We preprocessed the dataset by tokenizing, lowercasing, removing punctuations, and replacing digits with zeros. We used NLTK BIBREF12 and spaCy for sentence and word tokenization respectively.
In our exploratory analysis, we discovered that some articles have a very long text and some summaries have too many sentences. Articles with a long text are mostly articles containing a list, e.g., list of songs played in a concert, list of award nominations, and so on. Since such a list is never included in the summary, we truncated such articles so that the number of paragraphs are at most two standard deviations away from the mean. For each fold, the mean and standard deviation were estimated from the training set. We discarded articles whose summary is too long since we do not want lengthy summaries anyway. The cutoff length is defined by the upper limit of the Tukey's boxplot, where for each fold, the quartiles were estimated from the training set. After removing such articles, we ended up with roughly 19K articles in total. The complete statistics of the corpus is shown in Table TABREF5 .
Since the gold summaries provided by Shortir are abstractive, we needed to label the sentences in the article for training the supervised extractive summarizers. We followed Nallapati et al. BIBREF10 to make these labeled sentences (called oracles hereinafter) using their greedy algorithm. The idea is to maximize the ROUGE score between the labeled sentences and the abstractive gold summary. Although the provided gold summaries are abstractive, in this work we focused on extractive summarization because we think research on this area are more mature, especially for Indonesian, and thus starting with extractive summarization is a logical first step toward standardizing Indonesian text summarization research.
Since there can be many valid summaries for a given article, having only a single abstractive summary for an article is a limitation of our dataset which we acknowledge. Nevertheless, we feel that the existence of such dataset is a crucial step toward a fair benchmark for Indonesian text summarization research. Therefore, we make the dataset publicly available for others to use.
Evaluation
For evaluation, we used ROUGE BIBREF1 , a standard metric for text summarization. We used the implementation provided by pythonrouge. Following BIBREF11 , we report the INLINEFORM0 score of R-1, R-2, and R-L. Intuitively, R-1 and R-2 measure informativeness and R-L measures fluency BIBREF11 . We report the INLINEFORM1 score instead of just the recall score because although we extract a fixed number of sentences as the summary, the number of words are not limited. So, reporting only recall benefits models which extract long sentences.
Compared methods
We compared several summarization methods which can be categorized into three groups: unsupervised, non-neural supervised, and neural supervised methods. For the unsupervised methods, we tested:
SumBasic, which uses word frequency to rank sentences and selects top sentences as the summary BIBREF13 , BIBREF14 .
Lsa, which uses latent semantic analysis (LSA) to decompose the term-by-sentence matrix of a document and extracts sentences based on the result. We experimented with the two approaches proposed in BIBREF15 and BIBREF16 respectively.
LexRank, which constructs a graph representation of a document, where nodes are sentences and edges represent the similarity between two sentences, runs the PageRank algorithm on that graph, and extracts sentences based on the resulting PageRank values BIBREF17 . In the original implementation, sentences shorter than a certain threshold are removed; our implementation does not do this removal in order to reduce the number of tunable hyperparameters. The original method also uses cross-sentence informational subsumption (CSIS) during the sentence selection stage, but the paper does not explain it in detail, so we instead used an approximation to CSIS called cross-sentence word overlap, described in BIBREF18 by the same authors.
TextRank, which is very similar to LexRank but computes sentence similarity based on the number of common tokens BIBREF19 . A schematic sketch of this graph-based ranking approach is given below.
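The sketch below illustrates the graph-based rankers; it uses common-token similarity (as in TextRank) and plain power iteration in place of a full PageRank implementation, so it only approximates the cited methods.

```python
import math

def common_token_similarity(s1, s2):
    # TextRank-style similarity: shared tokens, normalized by sentence lengths.
    s1, s2 = set(s1), set(s2)
    if len(s1) < 2 or len(s2) < 2:
        return 0.0
    return len(s1 & s2) / (math.log(len(s1)) + math.log(len(s2)))

def pagerank(sim, damping=0.85, iters=50):
    # Plain power iteration over a weighted, undirected sentence graph.
    n = len(sim)
    scores = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            rank = 0.0
            for j in range(n):
                if i == j:
                    continue
                out = sum(sim[j][k] for k in range(n) if k != j)
                if out > 0:
                    rank += sim[j][i] / out * scores[j]
            new.append((1 - damping) / n + damping * rank)
        scores = new
    return scores

def textrank_summary(sentences, k=3):
    # sentences: list of token lists; returns indices of the top-k sentences.
    sim = [[common_token_similarity(a, b) for b in sentences] for a in sentences]
    scores = pagerank(sim)
    return sorted(sorted(range(len(sentences)), key=lambda i: -scores[i])[:k])
```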
For the non-neural supervised methods, we compared:
Bayes, which represents each sentence as a feature vector and uses naive Bayes to classify them BIBREF5 . The original paper computes the TF-IDF score on multi-word tokens that are identified automatically using mutual information. We did not perform this identification, so our TF-IDF computation operates on word tokens.
Hmm, which uses a hidden Markov model whose states correspond to whether the sentence should be extracted BIBREF20 . The original work uses QR decomposition for sentence selection, but our implementation does not: we simply ranked the sentences by their scores and picked the top 3 as the summary.
MaxEnt, which represents each sentence as a feature vector and leverages a maximum entropy model to compute the probability that a sentence should be extracted BIBREF21 . The original approach puts a prior distribution over the labels, but we put the prior on the weights instead. Our implementation still agrees with the original because we employed a bias feature which should be able to learn the prior label distribution. A small sketch of these feature-based classifiers follows below.
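The compact sketch below uses scikit-learn to illustrate the two feature-based classifiers. The features (relative position, length, mean TF-IDF) are illustrative placeholders rather than the exact feature sets of the cited papers; the logistic-regression intercept plays the role of the bias feature mentioned above.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

def featurize(doc_sentences, tfidf):
    # Illustrative per-sentence features: relative position, length, mean TF-IDF.
    feats = []
    for i, sent in enumerate(doc_sentences):
        scores = [tfidf.get(w, 0.0) for w in sent]
        feats.append([i / len(doc_sentences), len(sent),
                      sum(scores) / max(len(scores), 1)])
    return np.array(feats)

# X: stacked feature vectors over the training set, y: oracle labels (1 = extract).
# bayes = GaussianNB().fit(X, y)
# maxent = LogisticRegression(fit_intercept=True).fit(X, y)  # intercept ~ bias feature
# summary_idx = np.argsort(-maxent.predict_proba(featurize(doc, tfidf))[:, 1])[:3]
```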
As for the neural supervised method, we evaluated NeuralSum BIBREF11 using the original implementation by the authors. We modified their implementation slightly to allow for evaluating the model with ROUGE. Note that all the methods are extractive. Our implementation code for all the methods above is available online.
As a baseline, we used Lead-N, which selects the N leading sentences as the summary. For all methods, we extracted 3 sentences as the summary, since this is the median number of sentences in the gold summaries that we found in our exploratory analysis.
Experiment setup
Some of these approaches optionally require precomputed term frequency (TF) or inverse document frequency (IDF) tables and a stopword list. We precomputed the TF and IDF tables from Indonesian Wikipedia dump data and used the stopword list provided in BIBREF22 . Hyperparameters were tuned on the development set of each fold, optimizing for R-1 as it correlates best with human judgment BIBREF23 . For NeuralSum, we tried several scenarios:
(1) tuning the dropout rate while keeping other hyperparameters fixed,
(2) increasing the word embedding size from the default 50 to 300,
(3) initializing the word embeddings with the FastText pre-trained embedding BIBREF24 .
Scenario 2 is necessary to determine whether any improvement in scenario 3 is due to the larger embedding size or to the pre-trained embedding. In scenarios 2 and 3, we used the default hyperparameter setting from the authors' implementation. In addition, for every scenario, we picked the model saved at the epoch that yields the best R-1 score on the development set.
Overall results
Table TABREF26 shows the test F1 score of ROUGE-1, ROUGE-2, and ROUGE-L of all the tested models described previously. The mean and standard deviation (bracketed) of the scores are computed over the 5 folds. We include the score obtained by an oracle summarizer, denoted Oracle. Its summaries are obtained by using the true labels. This oracle summarizer acts as the upper bound of an extractive summarizer on our dataset. As we can see, in general, every scenario of NeuralSum consistently outperforms the other models significantly. The best scenario is NeuralSum with a word embedding size of 300, although its ROUGE scores are still within one standard deviation of NeuralSum with the default word embedding size. The Lead-3 baseline performs very well and outperforms almost all the other models, which is not surprising and is consistent with other work showing that for news summarization, the Lead-N baseline is hard to beat. Slightly below Lead-3 are LexRank and Bayes, but their scores are still within one standard deviation of each other, so their performance is on par. This result suggests that a non-neural supervised summarizer is not better than an unsupervised one, and thus if labeled data are available, it might be best to opt for a neural summarizer right away. We also note that despite its high ROUGE, every NeuralSum scenario still scores considerably lower than Oracle, hinting that it can be improved further. Moreover, initializing with the FastText pre-trained embedding slightly lowers the scores, although they are still within one standard deviation. This finding suggests that the effect of the FastText pre-trained embedding is unclear in our case.
Out-of-domain results
Since Indonesian is a low-resource language, collecting an in-domain dataset for any task (including summarization) can be difficult. Therefore, we experimented with an out-of-domain scenario to see if NeuralSum can be used easily for a new use case for which the dataset is scarce or non-existent. Concretely, we trained the best NeuralSum (with word embedding size of 300) on articles belonging to a source category and evaluated its performance on articles belonging to a target category, for every pair of source and target categories. As we have a total of 6 categories, we have 36 domain pairs to experiment on. To reduce computational cost, we used only the articles from the first fold and did not tune any hyperparameters. We note that this decision might undermine the generalizability of conclusions drawn from these out-of-domain experiments. Nonetheless, we feel that the results can still be useful guidance for future work. As comparisons, we also evaluated Lead-3, Oracle, and the best unsupervised method, LexRank. For LexRank, we used the best hyperparameters that we found in the previous experiment for the first fold. We only report the ROUGE-1 scores. Table TABREF27 shows the result of this experiment.
We see that almost all the results outperform the Lead-3 baseline, which means that for out-of-domain cases, NeuralSum can summarize not just by selecting some leading sentences from the original text. Almost all NeuralSum results also outperform LexRank, suggesting that when there is no in-domain training data, training NeuralSum on out-of-domain data may yield better performance than using an unsupervised model like LexRank. Looking at the best results, we observe that they are all out-of-domain cases. In other words, training on out-of-domain data is surprisingly better than training on in-domain data. For example, for Sport as the target domain, the best model is trained on Headline as the source domain. In fact, using Headline as the source domain yields the best result in 3 out of 6 target domains. We suspect that this phenomenon is due to the similarity between the corpora of the two domains. Specifically, training on Headline yields the best result most of the time because news from any domain can be headlines. Further investigation of this issue might leverage the domain similarity metrics proposed in BIBREF25 . Next, comparing the best NeuralSum performance on each target domain to Oracle, we still see quite a large gap. This gap hints that NeuralSum can still be improved further, probably by lifting the limitations of our experiment setup (e.g., tuning the hyperparameters for each domain pair).
Conclusion and future work
We presented IndoSum, a new benchmark dataset for Indonesian text summarization, and evaluated state-of-the-art extractive summarization methods on the dataset. We tested unsupervised, non-neural supervised, and neural supervised summarization methods. We used ROUGE as the evaluation metric because it is the standard intrinsic metric for text summarization evaluation. Our results show that neural models outperform non-neural ones and that, in the absence of an in-domain corpus, training on an out-of-domain one seems to yield better performance than using an unsupervised summarizer. Also, we found that the best performing model achieves ROUGE scores that are still significantly lower than the maximum possible scores, which suggests that the dataset is sufficiently challenging for future work. The dataset, which consists of 19K article-summary pairs, is publicly available. We hope that the dataset and the evaluation results can serve as a benchmark for future research on Indonesian text summarization.
Future work in this area may focus on improving the summarizer performance by employing newer neural models such as SummaRuNNer BIBREF10 or incorporating side information BIBREF26 . Since the gold summaries are abstractive, abstractive summarization techniques such as attention-based neural models BIBREF27 , seq2seq models BIBREF28 , pointer networks BIBREF29 , or reinforcement learning-based approach BIBREF30 can also be interesting directions for future avenue. Other tasks such as further investigation on the out-of-domain issue, human evaluation, or even extending the corpus to include more than one summary per article are worth exploring as well. | No |
3f5f74c39a560b5d916496e05641783c58af2c5d | 3f5f74c39a560b5d916496e05641783c58af2c5d_0 | Q: How are the synthetic examples generated?
Text: Introduction
In the last few years, research in natural text generation (NLG) has made significant progress, driven largely by the neural encoder-decoder paradigm BIBREF0, BIBREF1 which can tackle a wide array of tasks including translation BIBREF2, summarization BIBREF3, BIBREF4, structured-data-to-text generation BIBREF5, BIBREF6, BIBREF7 dialog BIBREF8, BIBREF9 and image captioning BIBREF10. However, progress is increasingly impeded by the shortcomings of existing metrics BIBREF7, BIBREF11, BIBREF12.
Human evaluation is often the best indicator of the quality of a system. However, designing crowd sourcing experiments is an expensive and high-latency process, which does not easily fit in a daily model development pipeline. Therefore, NLG researchers commonly use automatic evaluation metrics, which provide an acceptable proxy for quality and are very cheap to compute. This paper investigates sentence-level, reference-based metrics, which describe the extent to which a candidate sentence is similar to a reference one. The exact definition of similarity may range from string overlap to logical entailment.
The first generation of metrics relied on handcrafted rules that measure the surface similarity between the sentences. To illustrate, BLEU BIBREF13 and ROUGE BIBREF14, two popular metrics, rely on N-gram overlap. Because those metrics are only sensitive to lexical variation, they cannot appropriately reward semantic or syntactic variations of a given reference. Thus, they have been repeatedly shown to correlate poorly with human judgment, in particular when all the systems to compare have a similar level of accuracy BIBREF15, BIBREF16, BIBREF17.
Increasingly, NLG researchers have addressed those problems by injecting learned components in their metrics. To illustrate, consider the WMT Metrics Shared Task, an annual benchmark in which translation metrics are compared on their ability to imitate human assessments. The last two years of the competition were largely dominated by neural net-based approaches, RUSE, YiSi and ESIM BIBREF18, BIBREF11. Current approaches largely fall into two categories. Fully learned metrics, such as BEER, RUSE, and ESIM are trained end-to-end, and they typically rely on handcrafted features and/or learned embeddings. Conversely, hybrid metrics, such as YiSi and BERTscore combine trained elements, e.g., contextual embeddings, with handwritten logic, e.g., as token alignment rules. The first category typically offers great expressivity: if a training set of human ratings data is available, the metrics may take full advantage of it and fit the ratings distribution tightly. Furthermore, learned metrics can be tuned to measure task-specific properties, such as fluency, faithfulness, grammatically, or style. On the other hand, hybrid metrics offer robustness. They may provide better results when there is little to no training data, and they do not rely on the assumption that training and test data are identically distributed.
And indeed, the iid assumption is particularly problematic in NLG evaluation because of domain drifts, that have been the main target of the metrics literature, but also because of quality drifts: NLG systems tend to get better over time, and therefore a model trained on ratings data from 2015 may fail to distinguish top performing systems in 2019, especially for newer research tasks. An ideal learned metric would be able to both take full advantage of available ratings data for training, and be robust to distribution drifts, i.e., it should be able to extrapolate.
Our insight is that it is possible to combine expressivity and robustness by pre-training a fully learned metric on large amounts of synthetic data, before fine-tuning it on human ratings. To this end, we introduce Bleurt, a text generation metric based on BERT BIBREF19. A key ingredient of Bleurt is a novel pre-training scheme, which uses random perturbations of Wikipedia sentences augmented with a diverse set of lexical and semantic-level supervision signals.
To demonstrate our approach, we train Bleurt for English and evaluate it under different generalization regimes. We first verify that it provides state-of-the-art results on all recent years of the WMT Metrics Shared task (2017 to 2019, to-English language pairs). We then stress-test its ability to cope with quality drifts with a synthetic benchmark based on WMT 2017. Finally, we show that it can easily adapt to a different domain with three tasks from a data-to-text dataset, WebNLG 2017 BIBREF20. Ablations show that our synthetic pretraining scheme increases performance in the iid setting, and is critical to ensure robustness when the training data is scarce, skewed, or out-of-domain.
Preliminaries
Define $x = (x_1, \ldots, x_r)$ to be the reference sentence of length $r$, where each $x_i$ is a token, and let $\tilde{x} = (\tilde{x}_1, \ldots, \tilde{x}_p)$ be a prediction sentence of length $p$. Let $\lbrace (x_i, \tilde{x}_i, y_i)\rbrace _{i=1}^{N}$ be a training dataset of size $N$, where $y_i \in [0, 1]$ is the human rating that indicates how good $\tilde{x}_i$ is with respect to $x_i$. Given the training data, our goal is to learn a function $f: (x, \tilde{x}) \rightarrow y$ that predicts the human rating.
Fine-Tuning BERT for Quality Evaluation
Given the small amounts of rating data available, it is natural to leverage unsupervised representations for this task. In our model, we use BERT (Bidirectional Encoder Representations from Transformers) BIBREF19, which is an unsupervised technique that learns contextualized representations of sequences of text. Given $x$ and $\tilde{x}$, BERT is a Transformer BIBREF21 that returns a sequence of contextualized vectors: $v_{\mathrm{[CLS]}}, v_{x_1}, \ldots, v_{x_r}, v_{\tilde{x}_1}, \ldots, v_{\tilde{x}_p} = \mathrm{BERT}(x, \tilde{x})$
where $v_{\mathrm{[CLS]}}$ is the representation for the special $\mathrm{[CLS]}$ token. As described by devlin2018bert, we add a linear layer on top of the $\mathrm{[CLS]}$ vector to predict the rating: $\hat{y} = f(x, \tilde{x}) = W \tilde{v}_{\mathrm{[CLS]}} + b$
where $W$ and $b$ are the weight matrix and bias vector respectively. Both the above linear layer and the BERT parameters are trained (i.e., fine-tuned) on the supervised data, which typically numbers in a few thousand examples. We use the regression loss $\ell _{\textrm {supervised}} = \frac{1}{N} \sum _{n=1}^{N} \Vert y_n - \hat{y}_n \Vert ^2 $.
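A minimal PyTorch sketch of this fine-tuning setup follows, using the Hugging Face transformers BERT implementation as a stand-in for the original TensorFlow code.

```python
import torch
from torch import nn
from transformers import BertModel, BertTokenizerFast

class BleurtStyleRegressor(nn.Module):
    def __init__(self, name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(name)
        self.head = nn.Linear(self.bert.config.hidden_size, 1)  # W, b on [CLS]

    def forward(self, **enc):
        cls = self.bert(**enc).last_hidden_state[:, 0]  # v_[CLS]
        return self.head(cls).squeeze(-1)               # predicted rating y_hat

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BleurtStyleRegressor()

def supervised_loss(references, candidates, ratings):
    # Encode (reference, candidate) pairs jointly and regress onto human ratings.
    enc = tokenizer(references, candidates, padding=True,
                    truncation=True, return_tensors="pt")
    pred = model(**enc)
    return nn.functional.mse_loss(pred, torch.tensor(ratings))
```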
Although this approach is quite straightforward, we will show in Section SECREF5 that it gives state-of-the-art results on WMT Metrics Shared Task 17-19, which makes it a high-performing evaluation metric. However, fine-tuning BERT requires a sizable amount of iid data, which is less than ideal for a metric that should generalize to a variety of tasks and model drift.
Pre-Training on Synthetic Data
The key aspect of our approach is a pre-training technique that we use to “warm up” BERT before fine-tuning on rating data. We generate a large number of synthetic reference-candidate pairs $(z, \tilde{z})$, and we train BERT on several lexical- and semantic-level supervision signals with a multitask loss. As our experiments will show, Bleurt generalizes much better after this phase, especially with incomplete training data.
Any pre-training approach requires a dataset and a set of pre-training tasks. Ideally, the setup should resemble the final NLG evaluation task, i.e., the sentence pairs should be distributed similarly and the pre-training signals should correlate with human ratings. Unfortunately, we cannot have access to the NLG models that we will evaluate in the future. Therefore, we optimized our scheme for generality, with three requirements. (1) The set of reference sentences should be large and diverse, so that Bleurt can cope with a wide range of NLG domains and tasks. (2) The sentence pairs should contain a wide variety of lexical, syntactic, and semantic dissimilarities. The aim here is to anticipate all variations that an NLG system may produce, e.g., phrase substitution, paraphrases, noise, or omissions. (3) The pre-training objectives should effectively capture those phenomena, so that Bleurt can learn to identify them. The following sections present our approach.
Pre-Training on Synthetic Data ::: Generating Sentence Pairs
One way to expose Bleurt to a wide variety of sentence differences is to use existing sentence pairs datasets BIBREF22, BIBREF23, BIBREF24. These sets are a rich source of related sentences, but they may fail to capture the errors and alterations that NLG systems produce (e.g., omissions, repetitions, nonsensical substitutions). We opted for an automatic approach instead, that can be scaled arbitrarily and at little cost: we generate synthetic sentence pairs $(z, \tilde{z})$ by randomly perturbing 1.8 million segments $z$ from Wikipedia. We use three techniques: mask-filling with BERT, backtranslation, and randomly dropping out words. We obtain about 6.5 million perturbations $\tilde{z}$. Let us describe those techniques.
Pre-Training on Synthetic Data ::: Generating Sentence Pairs ::: Mask-filling with BERT:
BERT's initial training task is to fill gaps (i.e., masked tokens) in tokenized sentences. We leverage this functionality by inserting masks at random positions in the Wikipedia sentences, and fill them with the language model. Thus, we introduce lexical alterations while maintaining the fluency of the sentence. We use two masking strategies—we either introduce the masks at random positions in the sentences, or we create contiguous sequences of masked tokens. More details are provided in the Appendix.
Pre-Training on Synthetic Data ::: Generating Sentence Pairs ::: Backtranslation:
We generate paraphrases and perturbations with backtranslation, that is, round trips from English to another language and then back to English with a translation model BIBREF25, BIBREF26, BIBREF27. Our primary aim is to create variants of the reference sentence that preserves semantics. Additionally, we use the mispredictions of the backtranslation models as a source of realistic alterations.
Pre-Training on Synthetic Data ::: Generating Sentence Pairs ::: Dropping words:
We found it useful in our experiments to randomly drop words from the synthetic examples above to create other examples. This method prepares Bleurt for “pathological” behaviors of NLG systems, e.g., void predictions or sentence truncation.
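A minimal sketch of this perturbation, assuming whitespace-tokenized segments; drawing the number of dropped words uniformly follows the description given in the Appendix.

```python
import random

def drop_words(sentence, rng=random):
    # Randomly remove a uniformly drawn number of words from a perturbed segment.
    tokens = sentence.split()
    n_drop = rng.randint(0, len(tokens))  # up to the full sentence length
    drop_idx = set(rng.sample(range(len(tokens)), n_drop))
    return " ".join(t for i, t in enumerate(tokens) if i not in drop_idx)

# Example: drop_words("the cat sat on the mat") -> e.g. "cat sat the"
# (an empty result corresponds to the "void prediction" case mentioned above)
```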
Pre-Training on Synthetic Data ::: Pre-Training Signals
The next step is to augment each sentence pair $(z, \tilde{z})$ with a set of pre-training signals $\lbrace {\tau }_k\rbrace $, where ${\tau }_k$ is the target vector of pre-training task $k$. Good pre-training signals should capture a wide variety of lexical and semantic differences. They should also be cheap to obtain, so that the approach can scale to large amounts of synthetic data. The following section presents our 9 pre-training tasks, summarized in Table TABREF3. Additional implementation details are in the Appendix.
Pre-Training on Synthetic Data ::: Pre-Training Signals ::: Automatic Metrics:
We create three signals ${\tau _{\text{BLEU}}}$, ${\tau _{\text{ROUGE}}}$, and ${\tau _{\text{BERTscore}}}$ with sentence BLEU BIBREF13, ROUGE BIBREF14, and BERTscore BIBREF28 respectively (we use precision, recall and F-score for the latter two).
Pre-Training on Synthetic Data ::: Pre-Training Signals ::: Backtranslation Likelihood:
The idea behind this signal is to leverage existing translation models to measure semantic equivalence. Given a pair $(z, \tilde{z})$, this training signal measures the probability that $\tilde{z}$ is a backtranslation of $z$, $P(\tilde{z} \mid z)$, normalized by the length of $\tilde{z}$. Let $P_{\texttt {en}\rightarrow \texttt {fr}}(z_{\texttt {fr}} \mid z)$ be a translation model that assigns probabilities to French sentences $z_{\texttt {fr}}$ conditioned on English sentences $z$, and let $P_{\texttt {fr}\rightarrow \texttt {en}}(z \mid z_{\texttt {fr}})$ be a translation model that assigns probabilities to English sentences given French sentences. If $|\tilde{z}|$ is the number of tokens in $\tilde{z}$, we define our score as ${\tau }_{\text{en-fr}, \tilde{z} \mid z} = \frac{\log P(\tilde{z} \mid z)}{|\tilde{z}|}$, with: $P(\tilde{z} \mid z) = \sum _{z_{\texttt {fr}}} P_{\texttt {fr}\rightarrow \texttt {en}}(\tilde{z} \mid z_{\texttt {fr}})\, P_{\texttt {en}\rightarrow \texttt {fr}}(z_{\texttt {fr}} \mid z)$
Because computing the summation over all possible French sentences is intractable, we approximate the sum using $z_{\texttt {fr}}^\ast = \arg \max _{z_{\texttt {fr}}} P_{\texttt {en}\rightarrow \texttt {fr}} (z_{\texttt {fr}} \mid z)$ and we assume that $P_{\texttt {en}\rightarrow \texttt {fr}}(z_{\texttt {fr}}^\ast \mid z) \approx 1$: $P(\tilde{z} \mid z) \approx P_{\texttt {fr}\rightarrow \texttt {en}}(\tilde{z} \mid z_{\texttt {fr}}^\ast )$
We can trivially reverse the procedure to compute $P(z \mid \tilde{z})$; thus we create 4 pre-training signals ${\tau }_{\text{en-fr}, z \mid \tilde{z}}$, ${\tau }_{\text{en-fr}, \tilde{z} \mid z}$, ${\tau }_{\text{en-de}, z \mid \tilde{z}}$, ${\tau }_{\text{en-de}, \tilde{z} \mid z}$ with two pairs of languages ($\texttt {en}\leftrightarrow \texttt {de}$ and $\texttt {en}\leftrightarrow \texttt {fr}$) in both directions.
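The normalized backtranslation likelihood could be computed along the following lines. This sketch uses public Helsinki-NLP Marian checkpoints from Hugging Face transformers as stand-ins for the custom tensor2tensor models used here, normalizes by the target tokenizer's subword count rather than word count, and requires a recent transformers version for target-side tokenization.

```python
import torch
from transformers import MarianMTModel, MarianTokenizer

# Stand-in public checkpoints (not the models actually used in this work).
EN_FR, FR_EN = "Helsinki-NLP/opus-mt-en-fr", "Helsinki-NLP/opus-mt-fr-en"
tok_ef, mod_ef = MarianTokenizer.from_pretrained(EN_FR), MarianMTModel.from_pretrained(EN_FR)
tok_fe, mod_fe = MarianTokenizer.from_pretrained(FR_EN), MarianMTModel.from_pretrained(FR_EN)

def backtranslation_likelihood(z, z_tilde):
    # tau_{en-fr, z_tilde | z}: average log-probability per target token of z_tilde
    # given the French pivot z_fr* (the greedy/beam translation of z).
    with torch.no_grad():
        pivot_ids = mod_ef.generate(**tok_ef(z, return_tensors="pt"))
        z_fr = tok_ef.decode(pivot_ids[0], skip_special_tokens=True)
        enc = tok_fe(z_fr, return_tensors="pt")
        labels = tok_fe(text_target=z_tilde, return_tensors="pt").input_ids
        out = mod_fe(**enc, labels=labels)   # out.loss = mean cross-entropy per label token
        return -out.loss.item()              # ~ log P(z_tilde | z) / |z_tilde|
```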
Pre-Training on Synthetic Data ::: Pre-Training Signals ::: Textual Entailment:
The signal ${\tau }_\text{entail}$ expresses whether $z$ entails or contradicts $\tilde{z}$ using a classifier. We report the probability of three labels: Entail, Contradict, and Neutral, using BERT fine-tuned on an entailment dataset, MNLI BIBREF19, BIBREF23.
Pre-Training on Synthetic Data ::: Pre-Training Signals ::: Backtranslation flag:
The signal ${\tau }_\text{backtran\_flag}$ is a Boolean that indicates whether the perturbation was generated with backtranslation or with mask-filling.
Pre-Training on Synthetic Data ::: Modeling
For each pre-training task, our model uses either a regression or a classification loss. We then aggregate the task-level losses with a weighted sum.
Let ${\tau }_k$ describe the target vector for each task, e.g., the probabilities for the classes Entail, Contradict, and Neutral, or the precision, recall, and F-score for ROUGE. If ${\tau }_k$ is a regression task, then the loss used is the $\ell _2$ loss, i.e., $\ell _k = \Vert {\tau }_k - \hat{{\tau }}_k \Vert _2^2 / |{\tau }_k|$, where $|{\tau }_k|$ is the dimension of ${\tau }_k$ and $\hat{{\tau }}_k$ is computed by a task-specific linear layer on top of the $\textrm {[CLS]}$ embedding: $\hat{{\tau }}_k = W_{\tau _k} \tilde{v}_{\textrm {[CLS]}} + b_{\tau _k}$. If ${\tau }_k$ is a classification task, we use a separate linear layer to predict a logit for each class $c$: $\hat{{\tau }}_{kc} = W_{\tau _{kc}} \tilde{v}_{\textrm {[CLS]}} + b_{\tau _{kc}}$, and we use the multiclass cross-entropy loss. We define our aggregate pre-training loss function as follows: $\ell _{\text{pre-training}} = \frac{1}{M} \sum _{m=1}^{M} \sum _{k=1}^{K} \gamma _k \, \ell _k({\tau }_k^m, \hat{{\tau }}_k^m)$, where ${\tau }_k^m$ is the target vector for example $m$, $M$ is the number of synthetic examples, and $\gamma _k$ are hyperparameter weights obtained with grid search (more details in the Appendix).
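A schematic PyTorch sketch of this weighted multitask aggregation follows; the task list, output dimensions, and weights $\gamma_k$ below are placeholders rather than the actual configuration.

```python
import torch
from torch import nn

# Placeholder task specs: name -> (output_dim, is_regression, gamma_k).
TASKS = {"bertscore": (3, True, 1.0), "rouge": (3, True, 1.0),
         "entail": (3, False, 1.0), "backtran_flag": (2, False, 1.0)}

class PretrainHeads(nn.Module):
    def __init__(self, hidden=768):
        super().__init__()
        self.heads = nn.ModuleDict(
            {k: nn.Linear(hidden, d) for k, (d, _, _) in TASKS.items()})

    def forward(self, cls_vec, targets):
        # cls_vec: [batch, hidden]; targets: dict mapping task name to a tensor
        # (float vectors for regression tasks, long class indices for classification).
        total = 0.0
        for k, (dim, is_reg, gamma) in TASKS.items():
            pred = self.heads[k](cls_vec)
            if is_reg:
                loss = ((pred - targets[k]) ** 2).mean(dim=-1).mean()  # l2 / |tau_k|
            else:
                loss = nn.functional.cross_entropy(pred, targets[k])
            total = total + gamma * loss
        return total
```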
Experiments
In this section, we report our experimental results for two tasks, translation and data-to-text. First, we benchmark Bleurt against existing text generation metrics on the last 3 years of the WMT Metrics Shared Task BIBREF29. We then evaluate its robustness to quality drifts with a series of synthetic datasets based on WMT17. We test Bleurt's ability to adapt to different tasks with the WebNLG 2017 Challenge Dataset BIBREF20. Finally, we measure the contribution of each pre-training task with ablation experiments.
Experiments ::: Our Models:
Unless specified otherwise, all Bleurt models are trained in three steps: regular BERT pre-training BIBREF19, pre-training on synthetic data (as explained in Section SECREF4), and fine-tuning on task-specific ratings (translation and/or data-to-text). We experiment with two versions of Bleurt, BLEURT and BLEURTbase, respectively based on BERT-Large (24 layers, 1024 hidden units, 16 heads) and BERT-Base (12 layers, 768 hidden units, 12 heads) BIBREF19, both uncased. We use batch size 32, learning rate 1e-5, and 800,000 steps for pre-training and 40,000 steps for fine-tuning. We provide the full detail of our training setup in the Appendix.
Experiments ::: WMT Metrics Shared Task ::: Datasets and Metrics:
We use years 2017 to 2019 of the WMT Metrics Shared Task, to-English language pairs. For each year, we used the official WMT test set, which include several thousand pairs of sentences with human ratings from the news domain. The training sets contain 5,360, 9,492, and 147,691 records for each year. The test sets for years 2018 and 2019 are noisier, as reported by the organizers and shown by the overall lower correlations.
We evaluate the agreement between the automatic metrics and the human ratings. For each year, we report two metrics: Kendall's Tau $\tau $ (for consistency across experiments), and the official WMT metric for that year (for completeness). The official WMT metric is either Pearson's correlation or a robust variant of Kendall's Tau called DARR, described in the Appendix. All the numbers come from our own implementation of the benchmark. Our results are globally consistent with the official results but we report small differences in 2018 and 2019, marked in the tables.
Experiments ::: WMT Metrics Shared Task ::: Models:
We experiment with four versions of Bleurt: BLEURT, BLEURTbase, BLEURT -pre and BLEURTbase -pre. The first two models are based on BERT-large and BERT-base. In the latter two versions, we skip the pre-training phase and fine-tune directly on the WMT ratings. For each year of the WMT shared task, we use the test set from the previous years for training and validation. We describe our setup in further detail in the Appendix. We compare Bleurt to participant data from the shared task and automatic metrics that we ran ourselves. In the former case, we use the best-performing contestants for each year, that is, chrF++, BEER, Meteor++, RUSE, Yisi1, ESIM and Yisi1-SRL BIBREF30. All the contestants use the same WMT training data, in addition to existing sentence or token embeddings. In the latter case, we use Moses sentenceBLEU, BERTscore BIBREF28, and MoverScore BIBREF31. For BERTscore, we use BERT-large uncased for fairness, and roBERTa (the recommended version) for completeness BIBREF32. We run MoverScore on WMT 2017 using the scripts published by the authors.
Experiments ::: WMT Metrics Shared Task ::: Results:
Tables TABREF14, TABREF15, TABREF16 show the results. For years 2017 and 2018, a Bleurt-based metric dominates the benchmark for each language pair (Tables TABREF14 and TABREF15). BLEURT and BLEURTbase are also competitive for year 2019: they yield the best results for every language pair on Kendall's Tau, and they come first for 4 out of 7 pairs on DARR. As expected, BLEURT dominates BLEURTbase in the majority of cases. Pre-training consistently improves the results of BLEURT and BLEURTbase. We observe the largest effect on year 2017, where it adds up to 7.4 Kendall Tau points for BLEURTbase (zh-en). The effect is milder on years 2018 and 2019, up to 2.1 points (tr-en, 2018). We explain the difference by the fact that the training data used for 2017 is smaller than the datasets used for the following years, so pre-training is likelier to help. In general pre-training yields higher returns for BERT-base than for BERT-large—in fact, BLEURTbase with pre-training is often better than BLEURT without.
Takeaways: Pre-training delivers consistent improvements, especially for BERT-base. Bleurt yields state-of-the art performance for all years of the WMT Metrics Shared task.
Experiments ::: Robustness to Quality Drift
We assess our claim that pre-training makes Bleurt robust to quality drifts, by constructing a series of tasks for which it is increasingly pressured to extrapolate. All the experiments that follow are based on the WMT Metrics Shared Task 2017, because the ratings for this edition are particularly reliable.
Experiments ::: Robustness to Quality Drift ::: Methodology:
We create increasingly challenging datasets by sub-sampling the records from the WMT Metrics shared task, keeping low-rated translations for training and high-rated translations for test. The key parameter is the skew factor $\alpha $, that measures how much the training data is left-skewed and the test data is right-skewed. Figure FIGREF24 demonstrates the ratings distribution that we used in our experiments. The training data shrinks as $\alpha $ increases: in the most extreme case ($\alpha =3.0$), we use only 11.9% of the original 5,344 training records. We give the full detail of our sampling methodology in the Appendix.
We use BLEURT with and without pre-training and we compare to Moses sentBLEU and BERTscore. We use BERT-large uncased for both BLEURT and BERTscore.
Experiments ::: Robustness to Quality Drift ::: Results:
Figure FIGREF25 presents Bleurt's performance as we vary the train and test skew independently. Our first observation is that the agreements fall for all metrics as we increase the test skew. This effect was already described in the 2019 WMT Metrics report BIBREF11. A common explanation is that the task gets more difficult as the ratings get closer: it is easier to discriminate between “good” and “bad” systems than to rank “good” systems.
Training skew has a disastrous effect on Bleurt without pre-training: it is below BERTscore for $\alpha =1.0$, and it falls under sentBLEU for $\alpha \ge 1.5$. Pre-trained Bleurt is much more robust: the only case in which it falls under the baselines is $\alpha =3.0$, the most extreme drift, for which incorrect translations are used for training while excellent ones are used for testing.
Experiments ::: Robustness to Quality Drift ::: Takeaways:
Pre-training makes BLEURT significantly more robust to quality drifts.
Experiments ::: WebNLG Experiments
In this section, we evaluate Bleurt's performance on three tasks from a data-to-text dataset, the WebNLG Challenge 2017 BIBREF33. The aim is to assess Bleurt's capacity to adapt to new tasks with limited training data.
Experiments ::: WebNLG Experiments ::: Dataset and Evaluation Tasks:
The WebNLG challenge benchmarks systems that produce natural language description of entities (e.g., buildings, cities, artists) from sets of 1 to 5 RDF triples. The organizers released the human assessments for 9 systems over 223 inputs, that is, 4,677 sentence pairs in total (we removed null values). Each input comes with 1 to 3 reference descriptions. The submissions are evaluated on 3 aspects: semantics, grammar, and fluency. We treat each type of rating as a separate modeling task. The data has no natural split between train and test, therefore we experiment with several schemes. We allocate 0% to about 50% of the data to training, and we split on both the evaluated systems or the RDF inputs in order to test different generalization regimes.
Experiments ::: WebNLG Experiments ::: Systems and Baselines:
BLEURT -pre -wmt is a public BERT-large uncased checkpoint directly trained on the WebNLG ratings. BLEURT -wmt was first pre-trained on synthetic data, then fine-tuned on WebNLG data. BLEURT was trained in three steps: first on synthetic data, then on WMT data (16-18), and finally on WebNLG data. When a record comes with several references, we run BLEURT on each reference and report the highest value BIBREF28.
We report four baselines: BLEU, TER, Meteor, and BERTscore. The first three were computed by the WebNLG competition organizers. We ran the latter one ourselves, using BERT-large uncased for a fair comparison.
Experiments ::: WebNLG Experiments ::: Results:
Figure FIGREF26 presents the correlation of the metrics with human assessments as we vary the share of data allocated to training. The more pre-trained Bleurt is, the quicker it adapts. The vanilla BERT approach BLEURT -pre -wmt requires about a third of the WebNLG data to dominate the baselines on the majority of tasks, and it still lags behind on semantics (split by system). In contrast, BLEURT -wmt is competitive with as little as 836 records, and Bleurt is comparable with BERTscore with zero fine-tuning.
Experiments ::: WebNLG Experiments ::: Takeaways:
Thanks to pre-training, Bleurt can quickly adapt to the new tasks. Bleurt fine-tuned twice (first on synthetic data, then on WMT data) provides acceptable results on all tasks without training data.
Experiments ::: Ablation Experiments
Figure FIGREF36 presents our ablation experiments on WMT 2017, which highlight the relative importance of each pre-training task. On the left side, we compare Bleurt pre-trained on a single task to Bleurt without pre-training. On the right side, we compare full Bleurt to Bleurt pre-trained on all tasks except one. Pre-training on BERTscore, entailment, and the backtranslation scores yields improvements (symmetrically, ablating them degrades Bleurt). Conversely, BLEU and ROUGE have a negative impact. We conclude that pre-training on high quality signals helps BLEURT, but that metrics that correlate less well with human judgment may in fact harm the model.
Related Work
The WMT shared metrics competition BIBREF34, BIBREF18, BIBREF11 has inspired the creation of many learned metrics, some of which use regression or deep learning BIBREF35, BIBREF36, BIBREF37, BIBREF38, BIBREF30. Other metrics have been introduced, such as the recent MoverScore BIBREF31 which combines contextual embeddings and Earth Mover's Distance. We provide a head-to-head comparison with the best performing of those in our experiments. Other approaches do not attempt to estimate quality directly, but use information extraction or question answering as a proxy BIBREF7, BIBREF39, BIBREF40. Those are complementary to our work.
There has been recent work that uses BERT for evaluation. BERTScore BIBREF28 proposes replacing the hard n-gram overlap of BLEU with a soft-overlap using BERT embeddings. We use it in all our experiments. Bertr BIBREF30 and YiSi BIBREF30 also make use of BERT embeddings to compute a similarity score. Sum-QE BIBREF41 fine-tunes BERT for quality estimation as we describe in Section SECREF3. Our focus is different—we train metrics that are not only state-of-the-art in conventional iid experimental setups, but also robust in the presence of scarce and out-of-distribution training data. To our knowledge no existing work has explored pre-training and extrapolation in the context of NLG.
Noisy pre-training has been proposed before for other tasks such as paraphrasing BIBREF42, BIBREF43 but generally not with synthetic data. Generating synthetic data via paraphrases and perturbations has been commonly used for generating adversarial examples BIBREF44, BIBREF45, BIBREF46, BIBREF47, an orthogonal line of research.
Conclusion
We presented Bleurt, a reference-based text generation metric for English. Because the metric is trained end-to-end, Bleurt can model human assessment with superior accuracy. Furthermore, pre-training makes the metric particularly robust to both domain and quality drifts. Future research directions include multilingual NLG evaluation, and hybrid methods involving both humans and classifiers.
Acknowledgments
Thanks to Eunsol Choi, Nicholas FitzGerald, Jacob Devlin, and to the members of the Google AI Language team for the proof-reading, feedback, and suggestions. We also thank Madhavan Kidambi and Ming-Wei Chang, who implemented blank-filling with BERT.
Implementation Details of the Pre-Training Phase
This section provides implementation details for some of the pre-training techniques described in the main paper.
Implementation Details of the Pre-Training Phase ::: Data Generation ::: Random Masking:
We use two masking strategies. The first strategy samples random words in the sentence and replaces them with masks (one for each token). Thus, the masks are scattered across the sentence. The second strategy creates contiguous sequences: it samples a start position $s$ and a length $l$ (uniformly distributed), and it masks all the tokens spanned by words between positions $s$ and $s+l$. In both cases, we use up to 15 masks per sentence. Instead of running the language model once and picking the most likely token at each position, we use beam search (with a beam size of 8 by default). This enforces consistency and avoids repeated sequences, e.g., “,,,”.
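The two mask-placement strategies can be sketched as follows; filling the masks with the language model and beam search is omitted, and non-empty token lists are assumed.

```python
import random

MASK, MAX_MASKS = "[MASK]", 15

def scatter_masks(tokens, rng=random):
    # Strategy 1: masks at random, scattered positions.
    n = rng.randint(1, min(MAX_MASKS, len(tokens)))
    positions = set(rng.sample(range(len(tokens)), n))
    return [MASK if i in positions else t for i, t in enumerate(tokens)]

def contiguous_masks(tokens, rng=random):
    # Strategy 2: one contiguous masked span of uniformly drawn length.
    length = rng.randint(1, min(MAX_MASKS, len(tokens)))
    start = rng.randint(0, len(tokens) - length)
    return [MASK if start <= i < start + length else t for i, t in enumerate(tokens)]
```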
Implementation Details of the Pre-Training Phase ::: Data Generation ::: Backtranslation:
Consider English and French. Given a forward translation model $P_{\texttt {en}\rightarrow \texttt {fr}}(z_{\texttt {fr}} | z_{\texttt {en}})$ and a backward translation model $P_{\texttt {fr}\rightarrow \texttt {en}}(z_{\texttt {en}} | z_{\texttt {fr}})$, we generate $\tilde{z}$ as follows: $\tilde{z} = \arg \max _{z_{\texttt {en}}} \left( P_{\texttt {fr}\rightarrow \texttt {en}}(z_{\texttt {en}} | z_{\texttt {fr}}^\ast ) \right)$, where $z_{\texttt {fr}}^\ast = \arg \max _{z_{\texttt {fr}}} \left( P_{\texttt {en}\rightarrow \texttt {fr}}(z_{\texttt {fr}} | z ) \right)$. For the translations, we use a Transformer model BIBREF21, trained on English-German with the tensor2tensor framework.
Implementation Details of the Pre-Training Phase ::: Data Generation ::: Word dropping:
Given a synthetic example $(z, \tilde{z})$, we generate a pair $(z, \tilde{z}^{\prime })$ by randomly dropping words from $\tilde{z}$. We draw the number of words to drop uniformly, up to the length of the sentence. We apply this transformation on about 30% of the data generated with the previous method.
Implementation Details of the Pre-Training Phase ::: Modeling ::: Setting the weights of the pre-training tasks:
We set the weights $\gamma _k$ with grid search, optimizing Bleurt's performance on WMT 17's validation set. To reduce the size of the grid, we make groups of pre-training tasks that share the same weights: $({\tau }_{\text{BLEU}}, {\tau }_{\text{ROUGE}}, {\tau }_{\text{BERTscore}})$, $({\tau }_{\text{en-fr}, z \mid \tilde{z}}, {\tau }_{\text{en-fr}, \tilde{z} \mid z}, {\tau }_{\text{en-de}, z \mid \tilde{z}}, {\tau }_{\text{en-de}, \tilde{z} \mid z})$, and $({\tau }_{\text{entail}}, {\tau }_{\text{backtran\_flag}})$.
Implementation Details of the Pre-Training Phase ::: Pre-Training Tasks
We now provide additional details on the signals we uses for pre-training.
Implementation Details of the Pre-Training Phase ::: Pre-Training Tasks ::: Automatic Metrics:
As shown in the table, we use three types of signals: BLEU, ROUGE, and BERTscore. For BLEU, we used the original Moses sentenceBLEU implementation, using the Moses tokenizer and the default parameters. For ROUGE, we used the seq2seq implementation of ROUGE-N. We used a custom implementation of BERTscore, based on BERT-large uncased. ROUGE and BERTscore return three scores: precision, recall, and F-score. We use all three quantities.
Implementation Details of the Pre-Training Phase ::: Pre-Training Tasks ::: Backtranslation Likelihood:
We compute all the losses using custom Transformer model BIBREF21, trained on two language pairs (English-French and English-German) with the tensor2tensor framework.
Experiments–Supplementary Material ::: Training Setup for All Experiments
We use BERT's public checkpoints with Adam (the default optimizer), learning rate 1e-5, and batch size 32. Unless specified otherwise, we use 800,000 training steps for pre-training and 40,000 steps for fine-tuning. We run training and evaluation in parallel: we run the evaluation every 1,500 steps and store the checkpoint that performs best on a held-out validation set (more details on the data splits and our choice of metrics in the following sections). We use Google Cloud TPUs v2 for learning, and Nvidia Tesla V100 accelerators for evaluation and test. Our code uses Tensorflow 1.15 and Python 2.7.
Experiments–Supplementary Material ::: WMT Metric Shared Task ::: Metrics.
The metrics used to compare the evaluation systems vary across the years. The organizers use Pearson's correlation on standardized human judgments across all segments in 2017, and a custom variant of Kendall's Tau named “DARR” on raw human judgments in 2018 and 2019. The latter metric operates as follows. The organizers gather all the translations for the same reference segment, they enumerate all the possible pairs $(\text{translation}_1, \text{translation}_2)$, and they discard all the pairs which have a “similar” score (less than 25 points away on a 100 points scale). For each remaining pair, they then determine which translation is the best according to both human judgment and the candidate metric. Let $|\text{Concordant}|$ be the number of pairs on which the NLG metrics agree and $|\text{Discordant}|$ be those on which they disagree; then the score is computed as follows: $\text{DARR} = \frac{|\text{Concordant}| - |\text{Discordant}|}{|\text{Concordant}| + |\text{Discordant}|}$
The idea behind the 25 points filter is to make the evaluation more robust, since the judgments collected for WMT 2018 and 2019 are noisy. Kendall's Tau is identical, but it does not use the filter.
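The DARR computation described above can be sketched as follows, assuming `human` and `metric` map (segment, system) pairs to scores on a 100-point scale and `segments` maps each segment to the systems that translated it.

```python
from itertools import combinations

def darr(segments, human, metric, threshold=25.0):
    # human / metric: dict mapping (segment_id, system_id) -> score.
    concordant = discordant = 0
    for seg, systems in segments.items():
        for a, b in combinations(systems, 2):
            h_a, h_b = human[(seg, a)], human[(seg, b)]
            if abs(h_a - h_b) < threshold:   # discard "similar" pairs
                continue
            human_prefers_a = h_a > h_b
            metric_prefers_a = metric[(seg, a)] > metric[(seg, b)]
            if human_prefers_a == metric_prefers_a:
                concordant += 1
            else:
                discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# The unfiltered Kendall's Tau variant is the same computation with threshold=0.
```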
Experiments–Supplementary Material ::: WMT Metric Shared Task ::: Training setup.
To separate training and validation data, we set aside a fixed ratio of records in such a way that there is no “leak” between the datasets (i.e., train and validation records that share the same source). We use 10% of the data for validation for years 2017 and 2018, and 5% for year 2019. We report results for the models that yield the highest Kendall Tau across all records on validation data. The weights associated with each pre-training task (see our Modeling section) are set with grid search, using the train/validation setup of WMT 2017.
Experiments–Supplementary Material ::: WMT Metric Shared Task ::: Baselines.
We use three metrics: the Moses implementation of sentenceBLEU, BERTscore, and MoverScore, which are all available online. We run the Moses tokenizer on the reference and candidate segments before computing sentenceBLEU.
Experiments–Supplementary Material ::: Robustness to Quality Drift ::: Data Re-sampling Methodology:
We sample the training and test separately, as follows. We split the data in 10 bins of equal size. We then sample each record in the dataset with probabilities $\frac{1}{B^\alpha }$ and $\frac{1}{(11-B)^\alpha }$ for train and test respectively, where $B$ is the bin index of the record between 1 and 10, and $\alpha $ is a predefined skew factor. The skew factor $\alpha $ controls the drift: a value of 0 has no effect (the ratings are centered around 0), and value of 3.0 yields extreme differences. Note that the sizes of the datasets decrease as $\alpha $ increases: we use 50.7%, 30.3%, 20.4%, and 11.9% of the original 5,344 training records for $\alpha =0.5$, $1.0$, $1.5$, and $3.0$ respectively.
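The sub-sampling scheme above can be sketched as follows, assuming each record has already been assigned a bin index between 1 and 10 according to its rating.

```python
import random

def skewed_sample(records, bins, alpha, for_test=False, rng=random):
    # bins[i] in 1..10; keep each record with probability 1/B^alpha for training
    # or 1/(11 - B)^alpha for testing, where B is the record's bin index.
    kept = []
    for rec, b in zip(records, bins):
        p = 1.0 / ((11 - b) ** alpha) if for_test else 1.0 / (b ** alpha)
        if rng.random() < p:
            kept.append(rec)
    return kept

# Usage: train = skewed_sample(recs, bins, alpha=1.5)
#        test  = skewed_sample(recs, bins, alpha=1.5, for_test=True)
```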
Experiments–Supplementary Material ::: Ablation Experiment–How Much Pre-Training Time is Necessary?
To understand the relationship between pre-training time and downstream accuracy, we pre-train several versions of BLEURT and we fine-tune them on WMT17 data, varying the number of pre-training steps. Figure FIGREF60 presents the results. Most gains are obtained during the first 400,000 steps, that is, after about 2 epochs over our synthetic dataset. | Random perturbation of Wikipedia sentences using mask-filling with BERT, backtranslation and randomly drop out |
07f5e360e91b99aa2ed0284d7d6688335ed53778 | 07f5e360e91b99aa2ed0284d7d6688335ed53778_0 | Q: Do they measure the number of created No-Arc long sequences?
Text: Introduction
Greedy transition-based parsers are popular in NLP, as they provide competitive accuracy with high efficiency. They syntactically analyze a sentence by greedily applying transitions, which read it from left to right and produce a dependency tree.
However, this greedy process is prone to error propagation: one wrong choice of transition can lead the parser to an erroneous state, causing more incorrect decisions. This is especially crucial for long attachments requiring a larger number of transitions. In addition, transition-based parsers traditionally focus on only two words of the sentence and their local context to choose the next transition. The lack of a global perspective favors the presence of errors when creating arcs involving multiple transitions. As expected, transition-based parsers build short arcs more accurately than long ones BIBREF0 .
Previous research such as BIBREF1 and BIBREF2 proves that the widely-used projective arc-eager transition-based parser of Nivre2003 benefits from shortening the length of transition sequences by creating non-local attachments. In particular, they augmented the original transition system with new actions whose behavior entails more than one arc-eager transition and involves a context beyond the traditional two focus words. attardi06 and sartorio13 also extended the arc-standard transition-based algorithm BIBREF3 with the same success.
In the same vein, we present a novel unrestricted non-projective transition system based on the well-known algorithm by covington01fundamental that shortens the transition sequence necessary to parse a given sentence by the original algorithm, which becomes linear instead of quadratic with respect to sentence length. To achieve that, we propose new transitions that affect non-local words and are equivalent to one or more Covington actions, in a similar way to the transitions defined by Qi2017 based on the arc-eager parser. Experiments show that this novel variant significantly outperforms the original one in all datasets tested, and achieves the best reported accuracy for a greedy dependency parser on the Stanford Dependencies conversion of the WSJ Penn Treebank.
Non-Projective Covington Parser
The original non-projective parser defined by covington01fundamental was modelled under the transition-based parsing framework by Nivre2008. We only sketch this transition system briefly for space reasons, and refer to BIBREF4 for details.
Parser configurations have the form $\langle \lambda _1, \lambda _2, B, A \rangle $, where $\lambda _1$ and $\lambda _2$ are lists of partially processed words, $B$ is a list (called buffer) of unprocessed words, and $A$ is the set of dependency arcs built so far. Given an input string $w_1 \cdots w_n$, the parser starts at the initial configuration $\langle [], [], (w_1, \ldots , w_n), \emptyset \rangle $ and runs transitions until a terminal configuration of the form $\langle \lambda _1, \lambda _2, [], A \rangle $ is reached: at that point, $A$ contains the dependency graph for the input.
The set of transitions is shown in the top half of Figure FIGREF1 . Their logic can be summarized as follows: when in a configuration of the form $\langle \lambda _1 | i, \lambda _2, j | B, A \rangle $, the parser has the chance to create a dependency involving words $i$ and $j$, which we will call left and right focus words of that configuration. The Left-Arc and Right-Arc transitions are used to create a leftward ($i \leftarrow j$) or rightward arc ($i \rightarrow j$), respectively, between these words, and also move $i$ from $\lambda _1$ to the first position of $\lambda _2$, effectively moving the focus to $i-1$ and $j$. If no dependency is desired between the focus words, the No-Arc transition makes the same modification of $\lambda _1$ and $\lambda _2$, but without building any arc. Finally, the Shift transition moves the whole content of the list $\lambda _2$ plus $j$ to $\lambda _1$ when no more attachments are pending between $j$ and the words of $\lambda _1$, thus reading a new input word and placing the focus on $j$ and $j+1$. Transitions that create arcs are disallowed in configurations where this would violate the single-head or acyclicity constraints (cycles and nodes with multiple heads are not allowed in the dependency graph). Figure FIGREF4 shows the transition sequence in the Covington transition system which derives the dependency graph in Figure FIGREF3 .
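The transition semantics just described can be sketched as follows; words are represented by their positions, and the single-head and acyclicity preconditions are omitted for brevity.

```python
class Config:
    def __init__(self, n):
        self.l1, self.l2 = [], []            # lambda_1, lambda_2
        self.buffer = list(range(1, n + 1))  # unprocessed words
        self.arcs = set()                    # (head, dependent) pairs

    def focus(self):
        return self.l1[-1], self.buffer[0]   # left focus i, right focus j

def left_arc(c):   # leftward arc j -> i; move i to the front of lambda_2
    i, j = c.focus()
    c.arcs.add((j, i))
    c.l2.insert(0, c.l1.pop())

def right_arc(c):  # rightward arc i -> j; move i to the front of lambda_2
    i, j = c.focus()
    c.arcs.add((i, j))
    c.l2.insert(0, c.l1.pop())

def no_arc(c):     # move i to the front of lambda_2 without building an arc
    c.l2.insert(0, c.l1.pop())

def shift(c):      # lambda_1 <- lambda_1 . lambda_2 . j ; read the next input word
    c.l1 += c.l2 + [c.buffer.pop(0)]
    c.l2 = []
```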
The resulting parser can generate arbitrary non-projective trees, and its complexity is $O(n^2)$.
Non-Projective NL-Covington Parser
The original logic described by covington01fundamental parses a sentence by systematically traversing every pair of words. The Shift transition, introduced by Nivre2008 in the transition-based version, is an optimization that avoids the need to apply a sequence of No-Arc transitions to empty the list $\lambda _1$ before reading a new input word.
However, there are still situations where sequences of No-Arc transitions are needed. For example, if we are in a configuration $c$ with focus words $i$ and $j$ and the next arc we need to create goes from $j$ to a word located $k$ positions to the left of $i$, then we will need $k$ consecutive No-Arc transitions to move the left focus word to that position and then apply the corresponding arc transition. This could be avoided if a non-local arc transition could be undertaken directly at $c$, creating the required arc and moving the traversed words to $\lambda _2$ at once. The advantage of such an approach would be twofold: (1) less risk of making a mistake at $c$ due to considering a limited local context, and (2) a shorter transition sequence, alleviating error propagation.
We present a novel transition system called NL-Covington (for “non-local Covington”), described in the bottom half of Figure FIGREF1 . It consists of a modification of the non-projective Covington algorithm where: (1) the Left-Arc and Right-Arc transitions are parameterized with $k$, allowing the immediate creation of any attachment between $j$ and the $k$th leftmost word in $\lambda _1$ and moving the traversed words to $\lambda _2$ at once, and (2) the No-Arc transition is removed since it is no longer necessary.
This new transition system can use some restricted global information to build non-local dependencies and, consequently, reduce the number of transitions needed to parse the input. For instance, as presented in Figure FIGREF5 , the NL-Covington parser will need 9 transitions, instead of 12 traditional Covington actions, to analyze the sentence in Figure FIGREF3 .
In fact, while in the standard Covington algorithm a transition sequence for a sentence of length $n$ has length $O(n^2)$ in the worst case (if all nodes are connected to the first node, then we need to traverse every node to the left of each right focus word), for NL-Covington the sequence length is always $O(n)$: one Shift transition for each of the $n$ words, plus one arc-building transition for each of the arcs in the dependency tree. Note, however, that this does not affect the parser's time complexity, which is still quadratic as in the original Covington parser. This is because the algorithm has $O(n)$ possible transitions to be scored at each configuration, while the original Covington has $O(1)$ transitions due to being limited to creating local leftward/rightward arcs between the focus words.
The completeness and soundness of NL-Covington can easily be proved, as there is a mapping between transition sequences of both parsers, where a sequence of $k-1$ No-Arc transitions plus one arc transition in Covington is equivalent to a Left-Arc$_k$ or Right-Arc$_k$ in NL-Covington.
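On top of the sketch given earlier, this mapping can be expressed directly: a non-local arc transition behaves like $k-1$ No-Arc transitions followed by the corresponding local arc transition.

```python
def nl_left_arc(c, k):   # equivalent to (k-1) No-Arc transitions followed by Left-Arc
    for _ in range(k - 1):
        no_arc(c)
    left_arc(c)

def nl_right_arc(c, k):  # equivalent to (k-1) No-Arc transitions followed by Right-Arc
    for _ in range(k - 1):
        no_arc(c)
    right_arc(c)
```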
Data and Evaluation
We use 9 datasets from the CoNLL-X BIBREF5 and all datasets from the CoNLL-XI shared task BIBREF6 . To compare our system to the current state-of-the-art transition-based parsers, we also evaluate it on the Stanford Dependencies BIBREF7 conversion (using the Stanford parser v3.3.0) of the WSJ Penn Treebank BIBREF8 , hereinafter PT-SD, with standard splits. Labelled and Unlabelled Attachment Scores (LAS and UAS) are computed excluding punctuation only on the PT-SD, for comparability. We repeat each experiment with three independent random initializations and report the average accuracy. Statistical significance is assessed by a paired test with 10,000 bootstrap samples.
Model
To implement our approach we take advantage of the model architecture described in Qi2017 for the arc-swift parser, which extends the architecture of Kiperwasser2016 by applying a biaffine combination during the featurization process. We implement both the Covington and NL-Covington parsers under this architecture, adapt the featurization process with biaffine combination of Qi2017 to these parsers, and use their same training setup. More details about these model parameters are provided in Appendix SECREF6 .
Since this architecture uses batch training, we train with a static oracle. The NL-Covington algorithm has no spurious ambiguity at all, so there is only one possible static oracle: canonical transition sequences are generated by choosing the transition that builds the shortest pending gold arc involving the current right focus word $j$, or Shift if there are no unbuilt gold arcs involving $j$.
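The oracle's decision rule can be sketched as follows, reusing the configuration sketch given earlier; `gold` is the set of gold (head, dependent) arcs, and only arcs whose other endpoint is already in $\lambda_1$ are considered buildable.

```python
def oracle_transition(c, gold):
    # Choose the transition building the shortest pending gold arc involving j, else Shift.
    j = c.buffer[0]
    candidates = []
    for head, dep in gold - c.arcs:
        if head == j and dep in c.l1:
            candidates.append((j - dep, "left_arc", dep))     # leftward arc j -> dep
        elif dep == j and head in c.l1:
            candidates.append((j - head, "right_arc", head))  # rightward arc head -> j
    if not candidates:
        return ("shift", None)
    _, name, other = min(candidates)                  # shortest pending gold arc
    k = len(c.l1) - c.l1.index(other)                 # position of `other` from the right of lambda_1
    return (name, k)
```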
We note that a dynamic oracle can be obtained for the NL-Covington parser by adapting the one for standard Covington of GomFerACL2015. As NL-Covington transitions are concatenations of Covington ones, their loss calculation algorithm is compatible with NL-Covington. Apart from error exploration, this also opens the way to incorporating non-monotonicity BIBREF9 . While these approaches have shown to improve accuracy under online training settings, here we prioritize homogeneous comparability to BIBREF2 , so we use batch training and a static oracle, and still obtain state-of-the-art accuracy for a greedy parser.
Results
Table TABREF10 presents a comparison between the Covington parser and the novel variant developed here. The NL-Covington parser outperforms the original version in all datasets tested, with all improvements statistically significant ( INLINEFORM0 ).
Table TABREF12 compares our novel system with other state-of-the-art transition-based dependency parsers on the PT-SD. Greedy parsers are in the first block, beam-search and dynamic programming parsers in the second block. The third block shows the best result on this benchmark, obtained with constituent parsing with generative re-ranking and conversion to dependencies. Despite being the only non-projective parser tested on a practically projective dataset, our parser achieves the highest score among greedy transition-based models (even above those trained with a dynamic oracle).
We even slightly outperform the arc-swift system of Qi2017, with the same model architecture, implementation and training setup, but based on the projective arc-eager transition-based parser instead. This may be because our system takes into consideration any permissible attachment between the focus word $j$ and any word in $\lambda _1$ at each configuration, while their approach is limited by the arc-eager logic: it allows all possible rightward arcs (possibly fewer than our approach as the arc-eager stack usually contains a small number of words), but only one leftward arc is permitted per parser state. It is also worth noting that the arc-swift and NL-Covington parsers have the same worst-case time complexity, $O(n^2)$, as adding non-local arc transitions to the arc-eager parser increases its complexity from linear to quadratic, but it does not affect the complexity of the Covington algorithm. Thus, it can be argued that this technique is better suited to Covington than to arc-eager parsing.
We also compare NL-Covington to the arc-swift parser on the CoNLL datasets (Table TABREF15 ). For fairness of comparison, we projectivize (via maltparser) all training datasets, instead of filtering non-projective sentences, as some of the languages are significantly non-projective. Even doing that, the NL-Covington parser improves over the arc-swift system in terms of UAS in 14 out of 19 datasets, obtaining statistically significant improvements in accuracy on 7 of them, and statistically significant decreases in just one.
Finally, we analyze how our approach reduces the length of the transition sequence consumed by the original Covington parser. In Table TABREF16 we report the transition sequence length per sentence used by the Covington and the NL-Covington algorithms to analyze each dataset from the same benchmark used for evaluating parsing accuracy. As seen in the table, NL-Covington produces notably shorter transition sequences than Covington, with a reduction close to 50% on average.
Conclusion
We present a novel variant of the non-projective Covington transition-based parser by incorporating non-local transitions, reducing the length of transition sequences from INLINEFORM0 to INLINEFORM1 . This system clearly outperforms the original Covington parser and achieves the highest accuracy on the WSJ Penn Treebank (Stanford Dependencies) obtained to date with greedy dependency parsing.
Acknowledgments
This work has received funding from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150), from the TELEPARES-UDC (FFI2014-51978-C2-2-R) and ANSWER-ASAP (TIN2017-85160-C2-1-R) projects from MINECO, and from Xunta de Galicia (ED431B 2017/01).
Model Details
We provide more details of the neural network architecture used in this paper, which is taken from Qi2017.
The model consists of two blocks of 2-layered bidirectional long short-term memory (BiLSTM) networks BIBREF23 with 400 hidden units in each direction. The first block is used for POS tagging and the second one, for parsing. As the input of the tagging block, we use words represented as word embeddings, and BiLSTMs are employed to perform feature extraction. The resulting output is fed into a multi-layer perceptron (MLP), with a hidden layer of 100 rectified linear units (ReLU), that provides a POS tag for each input token in a 32-dimensional representation. Word embeddings concatenated to these POS tag embeddings serve as input of the second block of BiLSTMs to undertake the parsing stage. Then, the output of the parsing block is fed into an MLP with two separate ReLU hidden layers (one for deriving the representation of the head, and the other for the dependency label) that, after being merged and by means of a softmax function, scores all the feasible transitions, allowing the parser to greedily choose and apply the highest-scoring one.
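As a rough illustration only, the following PyTorch-style skeleton mirrors the two-block description above; the class name, simplified tag prediction and any defaults not stated in the text are assumptions of this sketch, not the released implementation (which follows Qi2017).

```python
import torch
import torch.nn as nn

class TaggerParserSkeleton(nn.Module):
    """Approximate skeleton of the two-block BiLSTM architecture described above."""
    def __init__(self, vocab_size, n_tags, word_dim=100, tag_dim=32, hidden=400):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        # Block 1: 2-layer BiLSTM for POS tagging, 400 units per direction
        self.tag_lstm = nn.LSTM(word_dim, hidden, num_layers=2,
                                bidirectional=True, batch_first=True)
        self.tag_mlp = nn.Sequential(nn.Linear(2 * hidden, 100), nn.ReLU(),
                                     nn.Linear(100, n_tags))
        self.tag_emb = nn.Embedding(n_tags, tag_dim)      # 32-dimensional tag representation
        # Block 2: 2-layer BiLSTM over word + tag embeddings for the parsing stage
        self.parse_lstm = nn.LSTM(word_dim + tag_dim, hidden, num_layers=2,
                                  bidirectional=True, batch_first=True)

    def forward(self, words):
        w = self.word_emb(words)
        tag_states, _ = self.tag_lstm(w)
        tag_scores = self.tag_mlp(tag_states)
        tags = tag_scores.argmax(-1)                      # simplified: hard tag choice
        parse_in = torch.cat([w, self.tag_emb(tags)], dim=-1)
        parse_states, _ = self.parse_lstm(parse_in)
        return tag_scores, parse_states   # parse_states feed the transition scorer
```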
Moreover, we adapt the featurization process with biaffine combination described in Qi2017 for the arc-swift system to be used on the original Covington and NL-Covington parsers. In particular, arc transitions are featurized by the concatenation of the representation of the head and dependent words of the arc to be created, the INLINEFORM0 transition is featurized by the rightmost word in INLINEFORM1 and the leftmost word in the buffer INLINEFORM2 and, finally, for the INLINEFORM3 transition only the leftmost word in INLINEFORM4 is used. Unlike what Qi2017 do for their baseline parsers, we do not use the featurization method detailed in Kiperwasser2016 for the original Covington parser, as we observed that this results in lower scores, which would make the comparison unfair in our case. We implement both systems under the same framework, with the original Covington parser represented as the NL-Covington system plus the INLINEFORM5 transition and with INLINEFORM6 limited to 1. A thorough description of the model architecture and featurization mechanism can be found in Qi2017.
Our training setup is exactly the same used by Qi2017, training the models during 10 epochs for large datasets and 30 for small ones. In addition, we initialize word embeddings with 100-dimensional GloVe vectors BIBREF25 for English and use 300-dimensional Facebook vectors BIBREF20 for other languages. The other parameters of the neural network keep the same values.
The parser's source code is freely available at https://github.com/danifg/Non-Local-Covington. | No |
11dde2be9a69a025f2fc29ce647201fb5a4df580 | 11dde2be9a69a025f2fc29ce647201fb5a4df580_0 | Q: By how much does the new parser outperform the current state-of-the-art?
Text: Introduction
Greedy transition-based parsers are popular in NLP, as they provide competitive accuracy with high efficiency. They syntactically analyze a sentence by greedily applying transitions, which read it from left to right and produce a dependency tree.
However, this greedy process is prone to error propagation: one wrong choice of transition can lead the parser to an erroneous state, causing more incorrect decisions. This is especially crucial for long attachments requiring a larger number of transitions. In addition, transition-based parsers traditionally focus on only two words of the sentence and their local context to choose the next transition. The lack of a global perspective favors the presence of errors when creating arcs involving multiple transitions. As expected, transition-based parsers build short arcs more accurately than long ones BIBREF0 .
Previous research such as BIBREF1 and BIBREF2 proves that the widely-used projective arc-eager transition-based parser of Nivre2003 benefits from shortening the length of transition sequences by creating non-local attachments. In particular, they augmented the original transition system with new actions whose behavior entails more than one arc-eager transition and involves a context beyond the traditional two focus words. attardi06 and sartorio13 also extended the arc-standard transition-based algorithm BIBREF3 with the same success.
In the same vein, we present a novel unrestricted non-projective transition system based on the well-known algorithm by covington01fundamental that shortens the transition sequence necessary to parse a given sentence by the original algorithm, which becomes linear instead of quadratic with respect to sentence length. To achieve that, we propose new transitions that affect non-local words and are equivalent to one or more Covington actions, in a similar way to the transitions defined by Qi2017 based on the arc-eager parser. Experiments show that this novel variant significantly outperforms the original one in all datasets tested, and achieves the best reported accuracy for a greedy dependency parser on the Stanford Dependencies conversion of the WSJ Penn Treebank.
Non-Projective Covington Parser
The original non-projective parser defined by covington01fundamental was modelled under the transition-based parsing framework by Nivre2008. We only sketch this transition system briefly for space reasons, and refer to BIBREF4 for details.
Parser configurations have the form INLINEFORM0 , where INLINEFORM1 and INLINEFORM2 are lists of partially processed words, INLINEFORM3 a list (called buffer) of unprocessed words, and INLINEFORM4 the set of dependency arcs built so far. Given an input string INLINEFORM5 , the parser starts at the initial configuration INLINEFORM6 and runs transitions until a terminal configuration of the form INLINEFORM7 is reached: at that point, INLINEFORM8 contains the dependency graph for the input.
The set of transitions is shown in the top half of Figure FIGREF1 . Their logic can be summarized as follows: when in a configuration of the form INLINEFORM0 , the parser has the chance to create a dependency involving words INLINEFORM1 and INLINEFORM2 , which we will call left and right focus words of that configuration. The INLINEFORM3 and INLINEFORM4 transitions are used to create a leftward ( INLINEFORM5 ) or rightward arc ( INLINEFORM6 ), respectively, between these words, and also move INLINEFORM7 from INLINEFORM8 to the first position of INLINEFORM9 , effectively moving the focus to INLINEFORM10 and INLINEFORM11 . If no dependency is desired between the focus words, the INLINEFORM12 transition makes the same modification of INLINEFORM13 and INLINEFORM14 , but without building any arc. Finally, the INLINEFORM15 transition moves the whole content of the list INLINEFORM16 plus INLINEFORM17 to INLINEFORM18 when no more attachments are pending between INLINEFORM19 and the words of INLINEFORM20 , thus reading a new input word and placing the focus on INLINEFORM21 and INLINEFORM22 . Transitions that create arcs are disallowed in configurations where this would violate the single-head or acyclicity constraints (cycles and nodes with multiple heads are not allowed in the dependency graph). Figure FIGREF4 shows the transition sequence in the Covington transition system which derives the dependency graph in Figure FIGREF3 .
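A compact Python sketch of these configurations and transitions follows; it is an illustration of the logic summarized above, and the permissibility checks that enforce the single-head and acyclicity constraints are omitted.

```python
# A configuration is (lambda1, lambda2, buffer, arcs); the focus words are
# i = lambda1[-1] and j = buffer[0]; arcs are (head, dependent) pairs.

def left_arc(c):                     # leftward arc i <- j, then move i to lambda2
    l1, l2, b, arcs = c
    i, j = l1[-1], b[0]
    return (l1[:-1], [i] + l2, b, arcs | {(j, i)})

def right_arc(c):                    # rightward arc i -> j, then move i to lambda2
    l1, l2, b, arcs = c
    i, j = l1[-1], b[0]
    return (l1[:-1], [i] + l2, b, arcs | {(i, j)})

def no_arc(c):                       # same move of i, but no arc is built
    l1, l2, b, arcs = c
    return (l1[:-1], [l1[-1]] + l2, b, arcs)

def shift(c):                        # read a new word: lambda1 := lambda1 . lambda2 . j
    l1, l2, b, arcs = c
    return (l1 + l2 + [b[0]], [], b[1:], arcs)
```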
The resulting parser can generate arbitrary non-projective trees, and its complexity is INLINEFORM0 .
Non-Projective NL-Covington Parser
The original logic described by covington01fundamental parses a sentence by systematically traversing every pair of words. The INLINEFORM0 transition, introduced by Nivre2008 in the transition-based version, is an optimization that avoids the need to apply a sequence of INLINEFORM1 transitions to empty the list INLINEFORM2 before reading a new input word.
However, there are still situations where sequences of INLINEFORM0 transitions are needed. For example, if we are in a configuration INLINEFORM1 with focus words INLINEFORM2 and INLINEFORM3 and the next arc we need to create goes from INLINEFORM4 to INLINEFORM5 INLINEFORM6 , then we will need INLINEFORM7 consecutive INLINEFORM8 transitions to move the left focus word to INLINEFORM9 and then apply INLINEFORM10 . This could be avoided if a non-local INLINEFORM11 transition could be undertaken directly at INLINEFORM12 , creating the required arc and moving INLINEFORM13 words to INLINEFORM14 at once. The advantage of such approach would be twofold: (1) less risk of making a mistake at INLINEFORM15 due to considering a limited local context, and (2) shorter transition sequence, alleviating error propagation.
We present a novel transition system called NL-Covington (for “non-local Covington”), described in the bottom half of Figure FIGREF1. It consists of a modification of the non-projective Covington algorithm where: (1) the INLINEFORM0 and INLINEFORM1 transitions are parameterized with INLINEFORM2, allowing the immediate creation of any attachment between INLINEFORM3 and the INLINEFORM4 th leftmost word in INLINEFORM5 and moving INLINEFORM6 words to INLINEFORM7 at once, and (2) the INLINEFORM8 transition is removed since it is no longer necessary.
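Continuing the sketch given after the original transition description, the parameterized arc transitions can be written roughly as below. The choice to count k from the end of the list nearest the focus word is an assumption of this illustration, and permissibility checks are again omitted.

```python
def nl_arc(c, k, leftward=True):
    """Sketch of the non-local arc transitions: build an arc between the right focus
    word j and the word k positions into lambda1 (counted from the end nearest j, an
    assumed convention), moving those k words to lambda2 in one step. This is
    equivalent to k - 1 NO_ARC transitions followed by one local arc transition."""
    l1, l2, b, arcs = c
    i, j = l1[-k], b[0]
    new_arc = (j, i) if leftward else (i, j)
    return (l1[:-k], l1[-k:] + l2, b, arcs | {new_arc})
```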
This new transition system can use some restricted global information to build non-local dependencies and, consequently, reduce the number of transitions needed to parse the input. For instance, as presented in Figure FIGREF5 , the NL-Covington parser will need 9 transitions, instead of 12 traditional Covington actions, to analyze the sentence in Figure FIGREF3 .
In fact, while in the standard Covington algorithm a transition sequence for a sentence of length INLINEFORM0 has length INLINEFORM1 in the worst case (if all nodes are connected to the first node, then we need to traverse every node to the left of each right focus word); for NL-Covington the sequence length is always INLINEFORM2 : one INLINEFORM3 transition for each of the INLINEFORM4 words, plus one arc-building transition for each of the INLINEFORM5 arcs in the dependency tree. Note, however, that this does not affect the parser's time complexity, which is still quadratic as in the original Covington parser. This is because the algorithm has INLINEFORM6 possible transitions to be scored at each configuration, while the original Covington has INLINEFORM7 transitions due to being limited to creating local leftward/rightward arcs between the focus words.
The completeness and soundness of NL-Covington can easily be proved as there is a mapping between transition sequences of both parsers, where a sequence of INLINEFORM0 INLINEFORM1 and one arc transition in Covington is equivalent to a INLINEFORM2 or INLINEFORM3 in NL-Covington.
Data and Evaluation
We use 9 datasets from the CoNLL-X BIBREF5 and all datasets from the CoNLL-XI shared task BIBREF6 . To compare our system to the current state-of-the-art transition-based parsers, we also evaluate it on the Stanford Dependencies BIBREF7 conversion (using the Stanford parser v3.3.0) of the WSJ Penn Treebank BIBREF8 , hereinafter PT-SD, with standard splits. Labelled and Unlabelled Attachment Scores (LAS and UAS) are computed excluding punctuation only on the PT-SD, for comparability. We repeat each experiment with three independent random initializations and report the average accuracy. Statistical significance is assessed by a paired test with 10,000 bootstrap samples.
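The significance test can be sketched roughly as follows; this is illustrative only, since the exact procedure used by the authors is not spelled out here.

```python
import numpy as np

def paired_bootstrap(scores_a, scores_b, n_samples=10000, seed=0):
    """Rough sketch of a paired bootstrap test over per-sentence attachment scores.
    Returns the fraction of resamples in which system A does not outperform system B,
    used as an approximate p-value for the observed improvement of A over B."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(scores_a, float), np.asarray(scores_b, float)
    n = len(a)
    not_better = 0
    for _ in range(n_samples):
        idx = rng.integers(0, n, n)           # resample sentences with replacement
        if a[idx].mean() <= b[idx].mean():
            not_better += 1
    return not_better / n_samples
```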
Model
To implement our approach we take advantage of the model architecture described in Qi2017 for the arc-swift parser, which extends the architecture of Kiperwasser2016 by applying a biaffine combination during the featurization process. We implement both the Covington and NL-Covington parsers under this architecture, adapt the featurization process with biaffine combination of Qi2017 to these parsers, and use their same training setup. More details about these model parameters are provided in Appendix SECREF6 .
Since this architecture uses batch training, we train with a static oracle. The NL-Covington algorithm has no spurious ambiguity at all, so there is only one possible static oracle: canonical transition sequences are generated by choosing the transition that builds the shortest pending gold arc involving the current right focus word INLINEFORM0 , or INLINEFORM1 if there are no unbuilt gold arcs involving INLINEFORM2 .
We note that a dynamic oracle can be obtained for the NL-Covington parser by adapting the one for standard Covington of GomFerACL2015. As NL-Covington transitions are concatenations of Covington ones, their loss calculation algorithm is compatible with NL-Covington. Apart from error exploration, this also opens the way to incorporating non-monotonicity BIBREF9. While these approaches have been shown to improve accuracy under online training settings, here we prioritize homogeneous comparability to BIBREF2, so we use batch training and a static oracle, and still obtain state-of-the-art accuracy for a greedy parser.
Results
Table TABREF10 presents a comparison between the Covington parser and the novel variant developed here. The NL-Covington parser outperforms the original version in all datasets tested, with all improvements statistically significant ( INLINEFORM0 ).
Table TABREF12 compares our novel system with other state-of-the-art transition-based dependency parsers on the PT-SD. Greedy parsers are in the first block, beam-search and dynamic programming parsers in the second block. The third block shows the best result on this benchmark, obtained with constituent parsing with generative re-ranking and conversion to dependencies. Despite being the only non-projective parser tested on a practically projective dataset, our parser achieves the highest score among greedy transition-based models (even above those trained with a dynamic oracle).
We even slightly outperform the arc-swift system of Qi2017, with the same model architecture, implementation and training setup, but based on the projective arc-eager transition-based parser instead. This may be because our system takes into consideration any permissible attachment between the focus word INLINEFORM0 and any word in INLINEFORM1 at each configuration, while their approach is limited by the arc-eager logic: it allows all possible rightward arcs (possibly fewer than our approach as the arc-eager stack usually contains a small number of words), but only one leftward arc is permitted per parser state. It is also worth noting that the arc-swift and NL-Covington parsers have the same worst-case time complexity, ( INLINEFORM2 ), as adding non-local arc transitions to the arc-eager parser increases its complexity from linear to quadratic, but it does not affect the complexity of the Covington algorithm. Thus, it can be argued that this technique is better suited to Covington than to arc-eager parsing.
We also compare NL-Covington to the arc-swift parser on the CoNLL datasets (Table TABREF15 ). For fairness of comparison, we projectivize (via maltparser) all training datasets, instead of filtering non-projective sentences, as some of the languages are significantly non-projective. Even doing that, the NL-Covington parser improves over the arc-swift system in terms of UAS in 14 out of 19 datasets, obtaining statistically significant improvements in accuracy on 7 of them, and statistically significant decreases in just one.
Finally, we analyze how our approach reduces the length of the transition sequence consumed by the original Covington parser. In Table TABREF16 we report the transition sequence length per sentence used by the Covington and the NL-Covington algorithms to analyze each dataset from the same benchmark used for evaluating parsing accuracy. As seen in the table, NL-Covington produces notably shorter transition sequences than Covington, with a reduction close to 50% on average.
Conclusion
We present a novel variant of the non-projective Covington transition-based parser by incorporating non-local transitions, reducing the length of transition sequences from INLINEFORM0 to INLINEFORM1 . This system clearly outperforms the original Covington parser and achieves the highest accuracy on the WSJ Penn Treebank (Stanford Dependencies) obtained to date with greedy dependency parsing.
Acknowledgments
This work has received funding from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150), from the TELEPARES-UDC (FFI2014-51978-C2-2-R) and ANSWER-ASAP (TIN2017-85160-C2-1-R) projects from MINECO, and from Xunta de Galicia (ED431B 2017/01).
Model Details
We provide more details of the neural network architecture used in this paper, which is taken from Qi2017.
The model consists of two blocks of 2-layered bidirectional long short-term memory (BiLSTM) networks BIBREF23 with 400 hidden units in each direction. The first block is used for POS tagging and the second one, for parsing. As the input of the tagging block, we use words represented as word embeddings, and BiLSTMs are employed to perform feature extraction. The resulting output is fed into a multi-layer perceptron (MLP), with a hidden layer of 100 rectified linear units (ReLU), that provides a POS tag for each input token in a 32-dimensional representation. Word embeddings concatenated to these POS tag embeddings serve as input of the second block of BiLSTMs to undertake the parsing stage. Then, the output of the parsing block is fed into an MLP with two separate ReLU hidden layers (one for deriving the representation of the head, and the other for the dependency label) that, after being merged and by means of a softmax function, scores all the feasible transitions, allowing the parser to greedily choose and apply the highest-scoring one.
Moreover, we adapt the featurization process with biaffine combination described in Qi2017 for the arc-swift system to be used on the original Covington and NL-Covington parsers. In particular, arc transitions are featurized by the concatenation of the representation of the head and dependent words of the arc to be created, the INLINEFORM0 transition is featurized by the rightmost word in INLINEFORM1 and the leftmost word in the buffer INLINEFORM2 and, finally, for the INLINEFORM3 transition only the leftmost word in INLINEFORM4 is used. Unlike what Qi2017 do for their baseline parsers, we do not use the featurization method detailed in Kiperwasser2016 for the original Covington parser, as we observed that this results in lower scores, which would make the comparison unfair in our case. We implement both systems under the same framework, with the original Covington parser represented as the NL-Covington system plus the INLINEFORM5 transition and with INLINEFORM6 limited to 1. A thorough description of the model architecture and featurization mechanism can be found in Qi2017.
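The featurization just described can be illustrated roughly as follows; the dictionary keys, the single-word choice for the no-arc case and the omitted biaffine scoring step are simplifications of this sketch.

```python
import torch

def featurize(kind, states, idx):
    """Sketch of the transition featurization described above. `states` holds one
    BiLSTM vector per word (shape: sentence_length x hidden); `idx` is a dict of
    word positions for the candidate transition."""
    if kind in ("LEFT_ARC", "RIGHT_ARC"):
        # arc transitions: concatenation of head and dependent representations
        return torch.cat([states[idx["head"]], states[idx["dep"]]], dim=-1)
    if kind == "SHIFT":
        # rightmost word of lambda1 and leftmost word of the buffer
        return torch.cat([states[idx["l1_right"]], states[idx["buf_left"]]], dim=-1)
    # NO_ARC (original Covington only): a single word representation is used
    return states[idx["l1_word"]]
```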
Our training setup is exactly the same used by Qi2017, training the models during 10 epochs for large datasets and 30 for small ones. In addition, we initialize word embeddings with 100-dimensional GloVe vectors BIBREF25 for English and use 300-dimensional Facebook vectors BIBREF20 for other languages. The other parameters of the neural network keep the same values.
The parser's source code is freely available at https://github.com/danifg/Non-Local-Covington. | Proposed method achieves 94.5 UAS and 92.4 LAS compared to 94.3 and 92.2 of the best state-of-the-art greedy-based parser. The best state-of-the-art parser overall achieves 95.8 UAS and 94.6 LAS.
bcce5eef9ddc345177b3c39c469b4f8934700f80 | bcce5eef9ddc345177b3c39c469b4f8934700f80_0 | Q: Do they evaluate only on English datasets?
Text: Introduction
A cryptocurrency is a digital currency designed to work as a medium of exchange that uses strong cryptography to secure financial transactions, control the creation of additional units, and verify the transfer of assets. They are based on decentralized systems built on block-chain technology, a distributed ledger enforced by a disparate network of computers BIBREF0. The first decentralized cryptocurrency, Bitcoin, was released as open-source software in 2009. After this release, approximately 4000 altcoins (other cryptocurrencies) have been released. As of August 2019, the total market capitalization of cryptocurrencies is $258 billion, where Bitcoin alone has a market capitalization of $179 billion BIBREF1.
Considering the huge market value of these currencies, they have attracted significant attention, where some people consider them as actual currencies and others as investment opportunities. This has resulted in large fluctuations in their prices. For instance in 2017 the value of Bitcoin increased approximately 2000% from $863 on January 9, 2017 to a high of $17,900 on December 15, 2017. However, eight weeks later, on February 5, 2018, the price had been more than halved to a value of just $6200 BIBREF2.
This high volatility in the value of cryptocurrencies means there is uncertainty for both investors, and for people who intend to use them as an actual currency. Cryptocurrency prices do not behave like those of traditional currencies and, therefore, it is difficult to determine what leads to this volatility. This in turn makes it a challenge to correctly predict the future prices of any cryptocurrency. To predict these prices, huge heterogeneous data volumes need to be collected from various sources such as blogs, IRC channels and social media. In particular, tweets from highly influential people and from the wider public have significant effects on the price of cryptocurrency BIBREF3. However, tweets need to be filtered and their sentiments need to be calculated in a timely fashion to help predict cryptocurrency prices in real time. Furthermore, real-time prediction also calls for real-time updating of learning algorithms, which introduces an additional difficulty. These challenges call for learning platforms based on big data architectures that can not only handle heterogeneous volumes of data but also be fault tolerant and persistent in real time.
In this paper we provide a novel real-time and adaptive cryptocurrency price prediction platform based on Twitter sentiments. The integrative and modular platform copes with the three aforementioned challenges in several ways. Firstly, it provides a Spark-based architecture which handles the large volume of incoming data in a persistent and fault tolerant way. Secondly, the proposed platform offers an approach that supports sentiment analysis based on VADER which can respond to large amounts of natural language processing queries in real time. Thirdly, the platform supports a predictive approach based on online learning in which a machine learning model adapts its weights to cope with new prices and sentiments. Finally, the platform is modular and integrative in the sense that it combines these different solutions to provide novel real-time tool support for bitcoin price prediction that is more scalable, data-rich, and proactive, and can help accelerate decision-making, uncover new opportunities and provide more timely insights based on the available and ever-larger financial data volume and variety.
The rest of the paper is organized as follows. Section 2 discusses the related work proposed in the literature. Section 3 discusses the design and implementation of KryptoOracle in detail and includes the description of all of its sub-components. Section 4 presents an experimental evaluation, including experimental data, setup and results. Finally, section 5 concludes the paper and describes future work.
Related Work
In this section we present a brief review of the state of the art related to cryptocurrency price prediction. Related works can be divided into three main categories: (i) social media sentiments and financial markets (including cryptocurrency markets); (ii) machine learning for cryptocurrency price prediction; and (iii) big data platforms for financial market prediction.
The `prospect theory' framed by Daniel Kahneman and Amos Tversky presents that financial decisions are significantly influenced by risk and emotions, and not just the value alone BIBREF4. This is further reinforced by other works in economic psychology and decision making such as BIBREF5 which show that variations in feelings that are widely experienced by people, influence investor decision-making and, consequently, lead to predictable patterns in equity pricing. These insights, therefore, open the possibility to leverage techniques such as sentiment analysis to identify patterns that could affect the price of an entity.
Considering the emergence and ubiquity of media, especially social media, further works have explored how it affects user sentiment and therefore financial markets. Paul Tetlock, in BIBREF6, explains how high media pessimism predicts downward pressure on market prices, and unusually high or low pessimism predicts high trading volume. Moreover, Gartner found in a study that a majority of consumers use social networks to inform buying decisions BIBREF7. This insight has given rise to several research efforts which have attempted to find correlations between media sentiments and different financial markets.
The authors in BIBREF8 retrieve, extract, and analyze the effects of news sentiments on the stock market. They develop a sentiment analysis dictionary for the financial sector leading to a dictionary-based sentiment analysis model. With this model trained only on news sentiments, the paper achieved a directional accuracy of 70.59% in predicting the trends in short-term stock price movement. The authors in BIBREF9 use the sentiment of message board comments to predict the stock movement. Unlike other approaches where the overall moods or sentiments are considered, this paper extracts the ‘topic-sentiment’ feature, which represents the sentiments of the specific topics of the company and uses that for stock forecasting. Using this method, the accuracy, averaged over 18 stocks in one year of transactions, was 2.07% better than that of the model using historical prices only. Similarly, Alan Dennis and Lingyao Yuan collected valence scores on tweets about the companies in the S&P 500 and found that they correlated with stock prices BIBREF10. The authors in BIBREF11 used a self-organizing fuzzy neural network, with Twitter mood from sentiment as an input, to predict price changes in the Dow Jones Industrial Average and achieved an 86.7% accuracy.
The recent emergence of cryptocurrencies and the widespread investment in them have motivated researchers to try to predict their price variations. The authors in BIBREF2 predict price fluctuations for three cryptocurrencies: Bitcoin, Litecoin and Ethereum. The news and social media data was labeled based on actual price changes one day in the future for each coin, rather than on positive or negative sentiment. By taking this approach, the model was able to directly predict price fluctuations instead of needing to first predict sentiment. Logistic regression worked best for Bitcoin predictions and the model was able to predict 43.9% of price increases and 61.9% of price decreases correctly. A work by Abraham et al. uses Twitter sentiment and Google Trends data to predict the price of Bitcoin and Ethereum BIBREF12. The paper uses the tweet volume in addition to the Twitter sentiment to establish a correlation with cryptocurrency price.
KryptoOracle draws greatest inspiration from BIBREF13 and BIBREF14. Both works use Twitter sentiments to find correlation with Bitcoin prices. The tweets are cleaned of non-alphanumeric symbols and then processed with VADER (Valence Aware Dictionary and sEntiment Reasoner) to analyze the sentiment of each tweet and classify it as negative, neutral, or positive. The compound sentiment score is then used to establish correlation with the Bitcoin prices over different lag intervals. KryptoOracle builds on what has been discussed above but goes beyond to construct a prediction engine that forecasts Bitcoin prices at specified intervals.
Machine learning has also been employed directly for cryptocurrency price prediction. For instance, the authors in BIBREF15 contribute to the Bitcoin forecasting literature by testing auto-regressive integrated moving average (ARIMA) and neural network auto-regression (NNAR) models to forecast the daily price movement based only on the historical price points. Similarly, the author in BIBREF16 presents a Neural Network framework to provide a deep machine learning solution to the cryptocurrency price prediction problem. The framework is realized in three instances with a Multi-layer Perceptron (MLP), a simple Recurrent Neural Network (RNN) and a Long Short-Term Memory (LSTM), which can learn long dependencies. In contrast, our prediction model, in addition to considering the social media influence, also employs online learning to continuously learn from its mistakes and improve itself in the process.
Since our engine is designed to run for an indefinite amount of time and continuously obtains real-time data, it is inevitable that this will lead to data storage concerns in the long run. Therefore, we treat our objective as a big data problem and employ big data tools to ensure scalability and performance. We take inspiration from BIBREF17, which uses Apache Spark and Hadoop HDFS to forecast stock market trends based on social media sentiment and historical price. Similarly, we leverage the performance of Apache Spark RDDs and the persistence of Apache Hive to build a solution that is fast, accurate and fault-tolerant. To our knowledge, KryptoOracle is the first solution of its kind, providing an out-of-the-box approach to real-time cryptocurrency price forecasting based on Twitter sentiments while ensuring that the data volume does not become a bottleneck to its performance.
KryptoOracle
KryptoOracle is an engine that aims at predicting the trends of any cryptocurrency based on the sentiment of the crowd. It does so by learning the correlation between the sentiments of relevant tweets and the real time price of the cryptocurrency. The engine bootstraps itself by first learning from the history given to it and starts predicting based on the previous correlation. KryptoOracle is also capable of reinforcing itself by learning from the mistakes it makes, improving its predictions over time. In addition, the engine supports trend visualization over time based on records of both incoming data and intermediate results. This engine has been built keeping in mind the increasing data volume, velocity and variety that has been made available and is therefore able to scale and manage high volumes of heterogeneous data.
KryptoOracle has been built in the Apache ecosystem and uses Apache Spark. Data structures in Spark are based on resilient distributed datasets (RDD), a read only multi-set of data which can be distributed over a cluster of machines and is fault tolerant. Spark applications run as separate processes on different clusters and are coordinated by the Spark object also referred to as the SparkContext. This element is the main driver of the program which connects with the cluster manager and helps acquire executors on different nodes to allocate resource across applications. Spark is highly scalable, being 100x faster than Hadoop on large datasets, and provides out of the box libraries for both streaming and machine learning.
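As a small, hypothetical illustration of the Spark primitives referred to here (not KryptoOracle's actual code), the following sketch builds a session, distributes a few records as an RDD and applies a transformation; the recorded lineage is what lets Spark recompute lost partitions after a failure.

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("KryptoOracleSketch")
         .enableHiveSupport()
         .getOrCreate())
sc = spark.sparkContext

records = [("2019-08-01 10:00", 0.41, 11350.2),    # (minute, summed sentiment, BTC price)
           ("2019-08-01 10:01", -0.12, 11348.7)]
rdd = sc.parallelize(records)
scaled = rdd.map(lambda r: (r[0], round(r[1], 2), r[2]))   # any transformation is tracked in the DAG
print(scaled.collect())
```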
KryptoOracle ::: Architecture
The growth of the volume of data inspired us to opt for a big data architecture which can not only handle the prediction algorithms but also the streaming and increasing volume of data in a fault tolerant way.
Figure FIGREF2 gives an overview of the architecture design. Central to this design is Apache Spark which acts as an in-memory data store and allows us to perform computations in a scalable manner. This data is the input to our machine learning model for making predictions. To bootstrap our model, we first gather a few days of data and store that in Apache Spark RDDs. Next, we perform computations to construct features from the raw data. All these computations are performed on data that is distributed across multiple Spark clusters and therefore will scale as the data grows continuously.
Once the machine learning model has been bootstrapped, we commence data streaming to get real-time data related to both the social media (in our case, Twitter) and the cryptocurrency. Similar computations are performed on this data to calculate the features and then this new data-point is used to get a future prediction from the model. This computed data-point is then appended to the already existing data in Spark RDDs, obtained from the bootstrap data. Therefore, in addition to making predictions we also keep expanding our data store which allows us to extract holistic visualizations from the data regarding the cryptocurrency market trend and how our own predictions capture that. Moreover, as we discuss later the new data-points are also used to retrain our model.
An important property of this architecture is the persistence of the data and the model. The machine learning model persists itself by storing its weights to disk and loading from it while retraining or reinforcing itself to learn from mistakes. The tweets and cryptocurrency training data is also stored in Apache Hive which provides data warehousing support to read, write and manage distributed datasets directly from disk. This persistence technique helps the whole platform to reset itself without omissions in real time.
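A sketch of what this persistence could look like is given below, reusing the `spark` session and `rdd` from the earlier sketch; the Hive table name and model file path are invented for the example.

```python
from pyspark.sql import Row
import xgboost as xgb

# Persist the accumulated training records to a Hive table so the platform can
# rebuild its in-memory state after a restart (table name is hypothetical).
rows = rdd.map(lambda r: Row(minute=r[0], score=r[1], price=r[2]))
spark.createDataFrame(rows).write.mode("append").saveAsTable("kryptooracle_training")

# Persist and restore the model weights between retraining steps (path is hypothetical).
booster = xgb.Booster()
booster.load_model("kryptooracle_model.bin")     # reload the previously saved weights
# ... predict / retrain on the latest data-point ...
booster.save_model("kryptooracle_model.bin")     # checkpoint the updated weights
```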
Spark RDD has the innate capability to recover itself because it stores all execution steps in a lineage graph. In case of any faults in the system, Spark redoes all the previous executions from the built DAG and recovers itself to the previous steady state from any fault such as memory overload. Spark RDDs lie at the core of KryptoOracle and therefore make it easier for it to recover from faults. Moreover, faults like memory overload or system crashes may require the whole system to hard reboot. However, due to the duplicate copies of the RDDs in Apache Hive and the stored previous state of the machine learning model, KryptoOracle can easily recover to the previous steady state.
KryptoOracle ::: Sentiment Analysis
In KryptoOracle we focus on sentiment analysis on a document level where each tweet is considered as a single document and we intend to determine its sentiment score. In general, there are primarily two main approaches for sentiment analysis: machine learning-based and lexicon-based. Machine learning-based approaches use classification techniques to classify text, while lexicon-based methods use a sentiment dictionary with opinion words and match them with the data to determine polarity. They assign sentiment scores to the opinion words describing how positive or negative the words contained in the dictionary are BIBREF18. Machine learning-based approaches are inherently supervised and require an adequately large training set for the model to learn the differentiating characteristics of the text corpus. In this paper we choose to forego this training aspect in favour of using a lexicon-based approach. This is because our objective is not to innovate in the natural language processing domain but instead to establish a scalable architecture that is able to capture the relationship between social media sources and financial markets, specifically in the context of the cryptocurrency market.
To measure the sentiment of each tweet VADER (Valence Aware Dictionary and sEntiment Reasoner) is used BIBREF19. VADER is a lexicon and rule-based sentiment analysis tool that is specifically attuned to sentiments expressed in social media. When given a text corpus, VADER outputs three valence scores for each sentiment i.e. positive, negative and neutral. A fourth compound score is computed by summing the valence scores of each word in the lexicon, adjusted according to the rules, and then normalized to be between -1 (extreme negative) and +1 (extreme positive). To summarize, it is a normalized, weighted composite score. This is the most useful metric for us since it provides a single uni-dimensional measure of sentiment for a given tweet. Therefore, we capture the sentiment of each tweet using the compound score.
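For example, VADER can be queried per tweet as in this small usage sketch (the tweet text is an invented example):

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
tweet = "Bitcoin is breaking out, this rally looks unstoppable!"   # example text
scores = analyzer.polarity_scores(tweet)
# scores is a dict of the form {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}
compound = scores["compound"]    # normalized value in [-1, 1], the metric used here
print(compound)
```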
However, this score is not the final metric that we use to build our machine learning model. It is quite intuitive that tweets belonging to influential personalities should be assigned more weight since they will have a more significant impact on the price of any cryptocurrency. To capture this relationship the compound score is multiplied by the poster's follower count, the number of likes on the tweet and the retweet count. The final score is calculated with the following equation:
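(The equation itself appears to be missing from this text; the following reconstruction is inferred from the description in the next paragraph and should be read as a best-effort sketch rather than the paper's exact formula.)

```latex
\mathit{Score} = \mathit{Compound} \times \mathit{UserFollowerCount}
                 \times (\mathit{Likes} + 1) \times (\mathit{RetweetCount} + 1)
```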
The +1 to both the RetweetCount and Likes ensures that the final score does not become zero if there are no likes or re-tweets for the tweet in question. UserFollowerCount does not have +1 in order to filter out the numerous bots on Twitter which flood cryptocurrency forums. We further normalize the score by taking the root of its absolute value and multiplying by -1 if the score is negative. This final score belongs to a single tweet and, since our prediction scope is for a certain time frame, we sum up all the normalized scores for the different tweets received during that time frame. This summed up score is then used as one of the features for our model to predict the cryptocurrency price for the future time frame.
KryptoOracle ::: Machine Learning
An important element of our architecture is the machine learning model, trained to capture the correlation between social media sentiment and a certain metric of the financial market, in our case, the price of cryptocurrency. An essential characteristic of the model is that it should be able to continuously evolve and adjust its weights according to the ever-changing social media sentiments and the volatile cryptocurrency market. We discuss later how we incorporate this in our model design. However, it is worth mentioning that our problem deals with structured data with features related to the social media sentiments and primitive or computed metrics of the cryptocurrency market.
In prediction problems involving unstructured data, ANNs (Artificial Neural Networks) tend to outperform all other algorithms or frameworks. However, when it comes to small-to-medium structured/tabular data as in our case, decision tree based algorithms are currently considered best-in-class. Therefore, we experimented with a few techniques but ultimately decided to use XGBoost BIBREF20 owing to its speed, performance and the quality of being easily re-trainable. XGBoost support for PySpark is still under development and has not yet been released. Therefore, at this moment we choose to deploy the model outside of our Spark framework. For bootstrapping the model, historical data points are exported outside the Spark framework and used to train the model initially. After this, as new real-time data arrives it is processed to create a new data-point of the required features. This data-point is then also exported outside Spark and fed to the machine learning model to obtain a prediction for the future price.
To continuously improve the model we employ online learning. The model is saved to disk and after every prediction we wait for the actual price value to arrive. This actual price value is then used to retrain the model as shown in Figure FIGREF5, so that it can learn from the error between the value it had predicted earlier and the actual value that arrived later. In this way the model keeps readjusting its weights to stay up to date with the market trends.
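A minimal sketch of this retraining loop is shown below; the feature names, hyperparameters and file path are assumptions of this example, and xgboost's `xgb_model` argument is used to continue training from the saved booster.

```python
import numpy as np
import xgboost as xgb

def retrain_on_arrival(model_path, features, actual_price):
    """Sketch of the online-learning step described above: once the actual price
    arrives, continue training the saved booster on that single observation."""
    X = np.asarray([features], dtype=float)   # e.g. [score, prev_close, ma_close, ma_score]
    y = np.asarray([actual_price], dtype=float)
    dtrain = xgb.DMatrix(X, label=y)
    params = {"objective": "reg:squarederror", "max_depth": 4, "eta": 0.1}
    booster = xgb.train(params, dtrain, num_boost_round=5, xgb_model=model_path)
    booster.save_model(model_path)
    return booster
```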
Experimental Evaluation
We used PySpark v2.3 in Jupyter notebooks with Python 2.7 kernels to code KryptoOracle. The entire source code was tested on a server instance on the SOSCIP cloud with 32 GB RAM, 8 CPUs and 120 GB HDD running on Ubuntu 18.04 over a period of 30 days. The data extraction and correlation codes were taken from “Correlation of Twitter sentiments with the evolution of cryptocurrencies”, which is publicly available BIBREF14. The data collected for this experiment was for the Bitcoin cryptocurrency.
Experimental Evaluation ::: Data
The data fed into KryptoOracle is primarily of two types, Twitter data which consists of tweets related to the cryptocurrency and the minutely cryptocurrency value.
Twitter data: We used the Twitter API to scrape tweets with hashtags. For instance, for Bitcoin, the #BTC and #Bitcoin tags were used. The Twitter API only allows a maximum of 450 requests per 15 minutes and historical data up to 7 days. Throughout our project we collected data for almost 30 days. Bitcoin had about 25000 tweets per day, amounting to a total of approximately 10 MB of data daily. For each tweet, the ID, text, username, number of followers, number of retweets, and creation date and time were also stored. All non-English tweets were filtered out by the API. We further processed the full tweet text by removing links, images, videos and hashtags to feed into the algorithm.
Cryptocurrency data: To obtain cryptocurrency data, the Cryptocompare API BIBREF21 was used. It provides a free API that provides the 7 day minutely values of any cryptocurrency. The data has several fields: time, open, close, high and low that correspond to the opening, closing, high and low values of the cryptocurrency in that particular time frame in USD.
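As a rough illustration of these two collection steps, the sketch below fetches and cleans recent tweets and pulls minutely candles; the credentials, query parameters and field choices are assumptions of this example and should be checked against the respective API documentation.

```python
import re
import requests
from twython import Twython

# --- Twitter: fetch and clean recent #BTC / #Bitcoin tweets (placeholder credentials) ---
twitter = Twython("APP_KEY", "APP_SECRET", "OAUTH_TOKEN", "OAUTH_TOKEN_SECRET")
results = twitter.search(q="#Bitcoin OR #BTC", lang="en", count=100, result_type="recent")

def clean(text):
    text = re.sub(r"http\S+", "", text)     # strip links
    text = re.sub(r"#\w+", "", text)        # strip hashtags
    return re.sub(r"\s+", " ", text).strip()

tweets = [{
    "id": s["id_str"],
    "text": clean(s["text"]),
    "followers": s["user"]["followers_count"],
    "retweets": s["retweet_count"],
    "likes": s["favorite_count"],
    "created_at": s["created_at"],
} for s in results["statuses"]]

# --- Cryptocompare: pull the last hour of minutely OHLC candles for BTC/USD ---
resp = requests.get("https://min-api.cryptocompare.com/data/histominute",
                    params={"fsym": "BTC", "tsym": "USD", "limit": 60}, timeout=10)
candles = resp.json().get("Data", [])
for c in candles:
    minute, o, h, l, close = c["time"], c["open"], c["high"], c["low"], c["close"]
```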
After collecting all the data, we aligned all tweets and cryptocurrency data by defined time windows of one minute and stored the resulting data in a training data RDD. This training data RDD was further processed as described in the later subsections and then fed into the machine learning algorithm. The same APIs and structure were also used to stream data into KryptoOracle in real time.
Experimental Evaluation ::: Procedure and Results
We started by collecting Twitter data with hashtags #Bitcoin and #BTC for a period of 14 days using Twython, a Python library that uses the Twitter API to extract tweets using relevant queries. The real time price of Bitcoin was also simultaneously collected using the Cryptocompare API. The Twitter data was cleaned to remove any hashtags, links, images and videos from the tweets. The sentiment score of each tweet was then computed as described in the previous section.
To analyze the data, we calculated the Spearman and Pearson correlation between the tweet scores and the Bitcoin prices as shown in Figure FIGREF13. The y-axis of the graphs denotes the lag in minutes, to see if there was any lag between the arrival of tweets and the Bitcoin prices. The trend of the tweet scores and the corresponding Bitcoin prices is captured in Figure FIGREF6. The hourly summed up Twitter sentiments and their corresponding mean Bitcoin price for the hour have been plotted in the graph. It can be seen in the figure that some spikes in sentiment scores correspond, directly or with some lag, to the Bitcoin price. We also noticed that the volume of incoming streaming tweets increases at the time of a radical change, which results in a higher cumulative score for the hour.
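The lagged correlations can be computed along these lines (a pandas/scipy sketch with hypothetical column names):

```python
import pandas as pd
from scipy.stats import pearsonr, spearmanr

def lagged_correlations(df, max_lag=60):
    """Correlate the minutely summed tweet score with the Bitcoin price `lag`
    minutes later; df has hypothetical 'score' and 'price' columns, one row per minute."""
    rows = []
    for lag in range(max_lag + 1):
        later_price = df["price"].shift(-lag)
        mask = later_price.notna()
        p, _ = pearsonr(df.loc[mask, "score"], later_price[mask])
        s, _ = spearmanr(df.loc[mask, "score"], later_price[mask])
        rows.append({"lag": lag, "pearson": p, "spearman": s})
    return pd.DataFrame(rows)
```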
The bitcoin price and Twitter sentiment features were not enough to predict the next minute price as they did not capture the ongoing trend. It was therefore important that the historical price of the cryptocurrency was also incorporated in the features so as to get a better prediction for the future. We, therefore, performed some time series manipulation to engineer two new features for our model. The first feature was the Previous Close Price that captured the close price of the cryptocurrency in the previous time frame. The next feature was the Moving Average of Close Price. This feature was a rolling average of the last 100 time frame close prices and aimed to capture the pattern with which the price was constrained to change. A similar new third feature called Moving Average of Scores was designed to capture the rolling average of the last 100 scores. This new feature captured the past sentiment information. With these three additional features combined with the final sentiment score computed in the previous subsection, we got the final training data as shown in Figure FIGREF14.
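These engineered features can be expressed concisely with pandas, as in this sketch over the same hypothetical minute-level frame used in the correlation example above:

```python
import pandas as pd

def add_features(df):
    """Sketch of the feature engineering described above on a minute-level frame
    with hypothetical 'close' and 'score' columns."""
    out = df.copy()
    out["prev_close"] = out["close"].shift(1)                  # Previous Close Price
    out["ma_close"] = out["close"].rolling(window=100).mean()  # Moving Average of Close Price
    out["ma_score"] = out["score"].rolling(window=100).mean()  # Moving Average of Scores
    return out.dropna()                                        # drop the warm-up rows
```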
Once the historical data was stored, all information was fed to the machine learning model. In our experiment, we stored historical data for a month but this can be easily extended as per user requirements.
Once the KryptoOracle engine was bootstrapped with historical data, the real time streamer was started. The real-time tweet scores were calculated in the same way as for the historical data, summed up over each minute, and sent to the machine learning model together with the Bitcoin price in the previous minute and the rolling average price. The model predicted the next minute's Bitcoin price from the given data. After the actual price arrived, the RMS value was calculated and the machine learning model updated itself to better predict the next value. All the calculated values were then appended back to the Spark training RDD. The RDD persisted all the data while training and check-pointed itself to the Hive database after a certain period of time.
We ran the engine for one day and got an overall root mean square (RMS) error of $10 between the actual and the predicted price of Bitcoin. The results for RMS values can be seen below.
Figure FIGREF15 shows the RMS error (in USD) for a period of 5 hours at the end of our experiment. The visualization output of KryptoOracle can be seen in Figure FIGREF12, which captures the actual price of Bitcoin and the price predicted by KryptoOracle over the same period of 5 hours. The graph shows clearly how KryptoOracle has been able to correctly predict the Bitcoin price one minute ahead of time. The engine clearly learns from the errors it makes and rewires itself to predict in real time, which can be seen from the adaptive nature of the predicted price graph.
Conclusion and Future Work
In this paper, we present a novel big data platform that can learn, predict and update itself in real time. We tested the engine on Twitter sentiments and cryptocurrency prices. We envision that this engine can be generalized to work on any real time changing market trend such as stock prices, loyalty towards product/company or even election results. Sentiments in real world can be extracted from not only tweets but also chats from IRC channels, news and other sources such as images and videos from YouTube or TV channels. This implies that the platform can be customized for tasks where the objective is to make predictions based on social media sentiments. In future, we plan to create a front-end for this system which can be used to visually capture the trend and also show historical aggregated data as per user input. Such a front-end could also allow the time window for prediction to be tweaked to predict prices for further ahead in time.
We understand that cryptocurrency prices are influenced by a lot of factors which cannot be captured by Twitter sentiments. Supply and demand of the coin and the interest of major investors are two major factors BIBREF22. To capture these factors, one has to add more features to the training data with inferences from multiple sources such as news, political reforms and macro-financial external factors such as stocks, gold rates and exchange rates. While we performed our experiments, the cryptocurrency values did not go through any major changes and thus this engine also needs to be tested with more adverse fluctuations. One way to capture fluctuations can be to trace back to the features that have gone through the major changes and adaptively assign them more weights while training the machine learning model.
There is also future work related to the machine learning part of the engine. The state of the art time series machine learning algorithms include the modern deep learning algorithms such as RNNs and LSTMs BIBREF23, but unfortunately Spark does not provide deep learning libraries yet. There are some plugins, such as Sparkflow, that facilitate neural network support, but work is also under way to provide Spark with such in-built deep learning support. Currently, Spark also does not have much streaming machine learning support, other than linear regression and linear classification. However, the advent of additional streaming algorithm support in Spark will certainly benefit engines such as KryptoOracle. | Yes |
d3092f78bdbe7e741932e3ddf997e8db42fa044c | d3092f78bdbe7e741932e3ddf997e8db42fa044c_0 | Q: What experimental evaluation is used?
Text: Introduction
A cryptocurrency is a digital currency designed to work as a medium of exchange that uses strong cryptography to secure financial transactions, control the creation of additional units, and verify the transfer of assets. They are based on decentralized systems built on block-chain technology, a distributed ledger enforced by a disparate network of computers BIBREF0. The first decentralized cryptocurrency, Bitcoin, was released as open-source software in 2009. After this release, approximately 4000 altcoins (other cryptocurrencies) have been released. As of August 2019, the total market capitalization of cryptocurrencies is $258 billion, where Bitcoin alone has a market capitalization of $179 billion BIBREF1.
Considering the huge market value of these currencies, they have attracted significant attention, where some people consider them as actual currencies and others as investment opportunities. This has resulted in large fluctuations in their prices. For instance in 2017 the value of Bitcoin increased approximately 2000% from $863 on January 9, 2017 to a high of $17,900 on December 15, 2017. However, eight weeks later, on February 5, 2018, the price had been more than halved to a value of just $6200 BIBREF2.
This high volatility in the value of cryptocurrencies means there is uncertainty for both investors, and for people who intend to use them as an actual currency. Cryptocurrency prices do not behave like those of traditional currencies and, therefore, it is difficult to determine what leads to this volatility. This in turn makes it a challenge to correctly predict the future prices of any cryptocurrency. To predict these prices, huge heterogeneous data volumes need to be collected from various sources such as blogs, IRC channels and social media. In particular, tweets from highly influential people and from the wider public have significant effects on the price of cryptocurrency BIBREF3. However, tweets need to be filtered and their sentiments need to be calculated in a timely fashion to help predict cryptocurrency prices in real time. Furthermore, real-time prediction also calls for real-time updating of learning algorithms, which introduces an additional difficulty. These challenges call for learning platforms based on big data architectures that can not only handle heterogeneous volumes of data but also be fault tolerant and persistent in real time.
In this paper we provide a novel real-time and adaptive cryptocurrency price prediction platform based on Twitter sentiments. The integrative and modular platform copes with the three aforementioned challenges in several ways. Firstly, it provides a Spark-based architecture which handles the large volume of incoming data in a persistent and fault tolerant way. Secondly, the proposed platform offers an approach that supports sentiment analysis based on VADER which can respond to large amounts of natural language processing queries in real time. Thirdly, the platform supports a predictive approach based on online learning in which a machine learning model adapts its weights to cope with new prices and sentiments. Finally, the platform is modular and integrative in the sense that it combines these different solutions to provide novel real-time tool support for bitcoin price prediction that is more scalable, data-rich, and proactive, and can help accelerate decision-making, uncover new opportunities and provide more timely insights based on the available and ever-larger financial data volume and variety.
The rest of the paper is organized as follows. Section 2 discusses the related work proposed in the literature. Section 3 discusses the design and implementation of KryptoOracle in detail and includes the description of all of its sub-components. Section 4 presents an experimental evaluation, including experimental data, setup and results. Finally, section 5 concludes the paper and describes future work.
Related Work
In this section we present a brief review of the state of the art related to cryptocurrency price prediction. Related works can be divided into three main categories: (i) social media sentiments and financial markets (including cryptocurrency markets); (ii) machine learning for cryptocurrency price prediction; and (iii) big data platforms for financial market prediction.
The `prospect theory' framed by Daniel Kahneman and Amos Tversky presents that financial decisions are significantly influenced by risk and emotions, and not just the value alone BIBREF4. This is further reinforced by other works in economic psychology and decision making such as BIBREF5 which show that variations in feelings that are widely experienced by people, influence investor decision-making and, consequently, lead to predictable patterns in equity pricing. These insights, therefore, open the possibility to leverage techniques such as sentiment analysis to identify patterns that could affect the price of an entity.
Considering the emergence and ubiquity of media, especially social media, further works have explored how it affects user sentiment and therefore financial markets. Paul Tetlock, in BIBREF6, explains how high media pessimism predicts downward pressure on market prices, and unusually high or low pessimism predicts high trading volume. Moreover, Gartner found in a study that a majority of consumers use social networks to inform buying decisions BIBREF7. This insight has given rise to several research efforts which have attempted to find correlations between media sentiments and different financial markets.
The authors in BIBREF8 retrieve, extract, and analyze the effects of news sentiments on the stock market. They develop a sentiment analysis dictionary for the financial sector leading to a dictionary-based sentiment analysis model. With this model trained only on news sentiments, the paper achieved a directional accuracy of 70.59% in predicting the trends in short-term stock price movement. The authors in BIBREF9 use the sentiment of message board comments to predict the stock movement. Unlike other approaches where the overall moods or sentiments are considered, this paper extracts the ‘topic-sentiment’ feature, which represents the sentiments of the specific topics of the company and uses that for stock forecasting. Using this method, the accuracy, averaged over 18 stocks in one year of transactions, was 2.07% better than that of the model using historical prices only. Similarly, Alan Dennis and Lingyao Yuan collected valence scores on tweets about the companies in the S&P 500 and found that they correlated with stock prices BIBREF10. The authors in BIBREF11 used a self-organizing fuzzy neural network, with Twitter mood from sentiment as an input, to predict price changes in the Dow Jones Industrial Average and achieved an 86.7% accuracy.
The recent emergence of cryptocurrencies and the widespread investment in them have motivated researchers to try to predict their price variations. The authors in BIBREF2 predict price fluctuations for three cryptocurrencies: Bitcoin, Litecoin and Ethereum. The news and social media data was labeled based on actual price changes one day in the future for each coin, rather than on positive or negative sentiment. By taking this approach, the model was able to directly predict price fluctuations instead of needing to first predict sentiment. Logistic regression worked best for Bitcoin predictions and the model was able to predict 43.9% of price increases and 61.9% of price decreases correctly. A work by Abraham et al. uses Twitter sentiment and Google Trends data to predict the price of Bitcoin and Ethereum BIBREF12. The paper uses the tweet volume in addition to the Twitter sentiment to establish a correlation with cryptocurrency price.
KryptoOracle draws its greatest inspiration from BIBREF13 and BIBREF14. Both works use Twitter sentiments to find correlations with Bitcoin prices. The tweets are cleaned of non-alphanumeric symbols and then processed with VADER (Valence Aware Dictionary and sEntiment Reasoner) to analyze the sentiment of each tweet and classify it as negative, neutral, or positive. The compound sentiment score is then used to establish correlation with the Bitcoin prices over different lag intervals. KryptoOracle builds on what has been discussed above but goes further by constructing a prediction engine that forecasts Bitcoin prices at specified intervals.
Machine learning has also been employed directly for cryptocurrency price prediction. For instance, the authors in BIBREF15 contribute to the Bitcoin forecasting literature by testing auto-regressive integrated moving average (ARIMA) and neural network auto-regression (NNAR) models to forecast the daily price movement based only on historical price points. Similarly, the author in BIBREF16 presents a neural network framework to provide a deep machine learning solution to the cryptocurrency price prediction problem. The framework is realized in three instantiations: a Multi-layer Perceptron (MLP), a simple Recurrent Neural Network (RNN) and a Long Short-Term Memory (LSTM), which can learn long dependencies. In contrast, our prediction model, in addition to considering social media influence, also employs online learning to continuously learn from its mistakes and improve itself in the process.
Since our engine is designed to run for an indefinite amount of time and continuously obtains real-time data, it is inevitable that this will lead to data storage concerns in the long run. Therefore, we treat our objective as a big data problem and employ big data tools to ensure scalability and performance. We take inspiration from BIBREF17, which uses Apache Spark and Hadoop HDFS to forecast stock market trends based on social media sentiment and historical price. Similarly, we leverage the performance of Apache Spark RDDs and the persistence of Apache Hive to build a solution that is fast, accurate and fault-tolerant. To our knowledge, KryptoOracle is the first solution of its kind that provides an out-of-the-box capability for real-time cryptocurrency price forecasting based on Twitter sentiments, while ensuring that the data volume does not become a bottleneck to its performance.
KryptoOracle
KryptoOracle is an engine that aims at predicting the trends of any cryptocurrency based on the sentiment of the crowd. It does so by learning the correlation between the sentiments of relevant tweets and the real-time price of the cryptocurrency. The engine bootstraps itself by first learning from the history given to it and then starts predicting based on the learned correlation. KryptoOracle is also capable of reinforcing itself using the mistakes it makes, improving its predictions in the process. In addition, the engine supports trend visualization over time based on records of both incoming data and intermediate results. The engine has been built keeping in mind the increasing data volume, velocity and variety that has become available and is therefore able to scale and manage high volumes of heterogeneous data.
KryptoOracle has been built in the Apache ecosystem and uses Apache Spark. Data structures in Spark are based on resilient distributed datasets (RDDs), read-only multi-sets of data that can be distributed over a cluster of machines and are fault tolerant. Spark applications run as independent sets of processes on a cluster, coordinated by the Spark driver object, also referred to as the SparkContext. This element is the main driver of the program: it connects with the cluster manager and acquires executors on different nodes to allocate resources across applications. Spark is highly scalable, can be up to 100x faster than Hadoop MapReduce for in-memory workloads on large datasets, and provides out-of-the-box libraries for both streaming and machine learning.
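To make the data layer concrete, the following is a minimal PySpark sketch of how such an RDD-backed store can be created; the application name, record layout and sample values are illustrative placeholders rather than the actual KryptoOracle code.

```python
from pyspark import SparkConf, SparkContext

# The SparkContext is the driver-side entry point: it connects to the
# cluster manager and acquires executors for the application.
conf = SparkConf().setAppName("KryptoOracleSketch").setMaster("local[*]")
sc = SparkContext(conf=conf)

# A resilient distributed dataset (RDD): a read-only, partitioned collection
# that Spark distributes across the executors and can rebuild from lineage.
records = [
    (1564653600, 0.42, 11350.0),   # (minute timestamp, summed sentiment score, BTC price)
    (1564653660, -0.13, 11348.5),
]
training_rdd = sc.parallelize(records).cache()

# Transformations are lazy and recorded in the lineage graph; actions such
# as count() trigger the distributed computation.
print(training_rdd.count())
```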
KryptoOracle ::: Architecture
The growing volume of data inspired us to opt for a big data architecture that can handle not only the prediction algorithms but also the streaming and ever-increasing volume of data in a fault-tolerant way.
Figure FIGREF2 gives an overview of the architecture design. Central to this design is Apache Spark which acts as an in-memory data store and allows us to perform computations in a scalable manner. This data is the input to our machine learning model for making predictions. To bootstrap our model, we first gather a few days of data and store that in Apache Spark RDDs. Next, we perform computations to construct features from the raw data. All these computations are performed on data that is distributed across multiple Spark clusters and therefore will scale as the data grows continuously.
Once the machine learning model has been bootstrapped, we commence data streaming to get real-time data related to both the social media (in our case, Twitter) and the cryptocurrency. Similar computations are performed on this data to calculate the features, and then this new data point is used to get a future prediction from the model. This computed data point is then appended to the already existing data in Spark RDDs, obtained from the bootstrap data. Therefore, in addition to making predictions, we also keep expanding our data store, which allows us to extract holistic visualizations regarding the cryptocurrency market trend and how our own predictions capture it. Moreover, as we discuss later, the new data points are also used to retrain our model.
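As a sketch of how the store keeps growing, a newly computed data point can be appended by building a new RDD that unions the existing one with the new record; the helper below assumes the (minute, score, price) layout used in the earlier sketch.

```python
def append_data_point(sc, training_rdd, new_point):
    # RDDs are immutable, so "appending" builds a new RDD that unions the
    # existing store with the freshly computed record and caches the result.
    return training_rdd.union(sc.parallelize([new_point])).cache()

# Example: after the latest one-minute window has been processed.
# training_rdd = append_data_point(sc, training_rdd, (1564653720, 0.07, 11351.2))
```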
An important property of this architecture is the persistence of both the data and the model. The machine learning model persists itself by storing its weights to disk and loading them while retraining or reinforcing itself to learn from mistakes. The tweets and cryptocurrency training data are also stored in Apache Hive, which provides data warehousing support to read, write and manage distributed datasets directly from disk. This persistence technique helps the whole platform recover to a consistent state, without data loss, in real time.
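A minimal sketch of this persistence step is given below, assuming a SparkSession created with Hive support and a training RDD of (minute, score, price) tuples; the table and column names are placeholders.

```python
from pyspark.sql import SparkSession

# Hive support lets Spark read and write warehouse tables directly on disk.
spark = (SparkSession.builder
         .appName("KryptoOracleSketch")
         .enableHiveSupport()
         .getOrCreate())

def persist_training_data(training_rdd):
    # Write the in-memory training records to a Hive-managed table so the
    # platform can rebuild its state after a restart or hard reboot.
    columns = ["minute", "sentiment_score", "btc_price"]
    df = spark.createDataFrame(training_rdd, columns)
    df.write.mode("append").saveAsTable("kryptooracle_training")

def restore_training_data():
    # During recovery, read the warehouse copy back into Spark as an RDD.
    return spark.table("kryptooracle_training").rdd
```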
Spark RDDs have the innate capability to recover themselves because Spark stores all execution steps in a lineage graph. In case of a fault such as a memory overload, Spark redoes the previous executions from the built DAG and recovers to the previous steady state. Since Spark RDDs lie at the core of KryptoOracle, this makes it easier for the engine to recover from faults. More severe faults, such as memory overload or system crashes, may require the whole system to hard reboot; however, due to the duplicate copies of the RDDs in Apache Hive and the stored previous state of the machine learning model, KryptoOracle can still recover to the previous steady state.
KryptoOracle ::: Sentiment Analysis
In KryptoOracle we focus on sentiment analysis on a document level where each tweet is considered as a single document and we intend to determine its sentiment score. In general, there are primarily two main approaches for sentiment analysis: machine learning-based and lexicon-based. Machine learning-based approaches use classification techniques to classify text, while lexicon-based methods use a sentiment dictionary with opinion words and match them with the data to determine polarity. They assign sentiment scores to the opinion words describing how positive or negative the words contained in the dictionary are BIBREF18. Machine learning-based approaches are inherently supervised and require an adequately large training set for the model to learn the differentiating characteristics of the text corpus. In this paper we choose to forego this training aspect in favour of using a lexicon-based approach. This is because our objective is not to innovate in the natural language processing domain but instead to establish a scalable architecture that is able to capture the relationship between social media sources and financial markets, specifically in the context of the cryptocurrency market.
To measure the sentiment of each tweet, VADER (Valence Aware Dictionary and sEntiment Reasoner) is used BIBREF19. VADER is a lexicon- and rule-based sentiment analysis tool that is specifically attuned to sentiments expressed in social media. When given a text corpus, VADER outputs three valence scores, one for each sentiment category, i.e., positive, negative and neutral. A fourth, compound score is computed by summing the valence scores of each word in the lexicon, adjusted according to the rules, and then normalized to lie between -1 (extreme negative) and +1 (extreme positive). In short, it is a normalized, weighted composite score. This is the most useful metric for us since it provides a single uni-dimensional measure of sentiment for a given tweet. Therefore, we capture the sentiment of each tweet using the compound score.
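For illustration, the snippet below shows how the compound score can be obtained with the standalone vaderSentiment package; the example tweet is made up.

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

tweet = "Bitcoin is breaking out, this rally looks unstoppable!"
scores = analyzer.polarity_scores(tweet)

# 'neg', 'neu' and 'pos' are the three valence scores; 'compound' is the
# normalized composite in [-1, +1] that is kept for each tweet.
compound = scores["compound"]
print(scores)
```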
However, this score is not the final metric that we use to build our machine learning model. It is quite intuitive that tweets belonging to influential personalities should be assigned more weight, since they will have a more significant impact on the price of any cryptocurrency. To capture this relationship, the compound score is multiplied by the poster's follower count, the number of likes on the tweet and the retweet count. The final score is calculated with the following equation:

Score = CompoundScore × UserFollowerCount × (RetweetCount + 1) × (Likes + 1)

The +1 applied to both RetweetCount and Likes ensures that the final score does not become zero if there are no likes or re-tweets for the tweet in question. UserFollowerCount does not have +1, in order to filter out the numerous bots on Twitter which flood cryptocurrency forums. We further normalize the score by taking the square root of its absolute value and restoring the sign, i.e., multiplying by -1 if the original score was negative. This final score belongs to a single tweet; since our prediction scope is a certain time frame, we sum up the normalized scores of all the different tweets received during that time frame. This summed-up score is then used as one of the features for our model to predict the cryptocurrency price for the future time frame.
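Putting the weighting and normalization together, the scoring described above can be sketched as follows; the field names of the tweet records are assumptions.

```python
import math

def tweet_score(compound, followers, retweets, likes):
    # Weight the VADER compound score by the tweet's reach; the +1 keeps the
    # score non-zero when there are no retweets or likes, while the follower
    # count is used as-is so zero-follower bots contribute nothing.
    raw = compound * followers * (retweets + 1) * (likes + 1)
    # Sign-preserving square root dampens very large magnitudes.
    return math.copysign(math.sqrt(abs(raw)), raw)

def window_score(tweets):
    # Sum the normalized scores of all tweets received in one time frame.
    return sum(tweet_score(t["compound"], t["followers"],
                           t["retweets"], t["likes"]) for t in tweets)
```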
KryptoOracle ::: Machine Learning
An important element of our architecture is the machine learning model, trained to capture the correlation between social media sentiment and a certain metric of the financial market, in our case, the price of cryptocurrency. An essential characteristic of the model is that it should be able to continuously evolve and adjust its weights according to the ever-changing social media sentiments and the volatile cryptocurrency market. We discuss later how we incorporate this in our model design. However, it is worth mentioning that our problem deals with structured data with features related to the social media sentiments and primitive or computed metrics of the cryptocurrency market.
In prediction problems involving unstructured data, ANNs (Artificial Neural Networks) tend to outperform other algorithms and frameworks. However, when it comes to small-to-medium structured/tabular data, as in our case, decision-tree-based algorithms are currently considered best-in-class. We therefore experimented with a few techniques and ultimately decided to use XGBoost BIBREF20, owing to its speed, performance and the quality of being easily re-trainable. XGBoost support for PySpark is still under development and has not yet been released, so at this moment we choose to deploy the model outside of our Spark framework. For bootstrapping the model, historical data points are exported outside the Spark framework and used to train the model initially. After this, as new real-time data arrives, it is processed to create a new data point with the required features. This data point is then also exported outside Spark and fed to the machine learning model to obtain a prediction for the future price.
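A minimal sketch of this bootstrap step is shown below, assuming the historical features have already been exported from Spark to a CSV file with the (illustrative) column names used here.

```python
import pandas as pd
import xgboost as xgb

# Historical features exported from Spark (file and column names are illustrative).
history = pd.read_csv("bootstrap_features.csv")
feature_cols = ["sentiment_score", "prev_close", "ma_close_100", "ma_score_100"]

dtrain = xgb.DMatrix(history[feature_cols], label=history["next_close"])
params = {"objective": "reg:squarederror", "max_depth": 6, "eta": 0.1}

# Bootstrap the regressor on the historical window, then keep it on disk so
# the online loop can reload and extend it after every new observation.
booster = xgb.train(params, dtrain, num_boost_round=200)
booster.save_model("kryptooracle_xgb.model")
```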
To continuously improve the model we employ online learning. The model is saved to disk and after every prediction we wait for the actual price value to arrive. This actual price value is then used to retrain the model as shown in Figure FIGREF5, so that it can learn from the error between the value it had predicted earlier and the actual value that arrived later. In this way the model keeps readjusting its weights to stay up to date with the market trends.
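One way to realize this online update with XGBoost is to continue boosting from the previously saved model once the actual price arrives, as sketched below; the file name and the number of extra boosting rounds are placeholders.

```python
import xgboost as xgb

def online_update(feature_row, actual_price, model_path="kryptooracle_xgb.model"):
    """Retrain the saved booster on the newly observed (features, price) pair.

    feature_row is expected to be a one-row 2-D array or DataFrame holding
    the same feature columns used at bootstrap time.
    """
    booster = xgb.Booster()
    booster.load_model(model_path)

    dnew = xgb.DMatrix(feature_row, label=[actual_price])

    # Passing xgb_model continues boosting from the existing trees instead of
    # fitting from scratch, so the model adjusts to its latest error.
    booster = xgb.train({"objective": "reg:squarederror"}, dnew,
                        num_boost_round=5, xgb_model=booster)
    booster.save_model(model_path)
    return booster
```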
Experimental Evaluation
We used PySpark v2.3 in Jupyter notebooks with Python 2.7 kernels to code KryptoOracle. The entire source code was tested on a server instance on the SOSCIP cloud with 32 GB RAM, 8 CPUs and 120 GB HDD running Ubuntu 18.04, over a period of 30 days. The data extraction and correlation code was taken from "Correlation of Twitter sentiments with the evolution of cryptocurrencies," which is publicly available BIBREF14. The data collected for this experiment was for the Bitcoin cryptocurrency.
Experimental Evaluation ::: Data
The data fed into KryptoOracle is primarily of two types: Twitter data, which consists of tweets related to the cryptocurrency, and the minutely cryptocurrency price values.
Twitter data: We used the Twitter API to scrape tweets with hashtags. For instance, for Bitcoin, the #BTC and #Bitcoin tags were used. The Twitter API only allows a maximum of 450 requests per 15 minutes and historical data up to 7 days old. Throughout our project we collected data for almost 30 days. Bitcoin had about 25000 tweets per day, amounting to a total of approximately 10 MB of data daily. For each tweet, the ID, text, username, number of followers, number of retweets, and creation date and time were stored. All non-English tweets were filtered out by the API. We further processed the full tweet text by removing links, images, videos and hashtags before feeding it into the algorithm.
Cryptocurrency data: To obtain cryptocurrency data, the Cryptocompare API BIBREF21 was used. It offers a free API that returns the minutely values of any cryptocurrency for the past 7 days. The data has several fields: time, open, close, high and low, which correspond to the opening, closing, high and low values of the cryptocurrency in that particular time frame, in USD.
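For illustration, the request below sketches how the minutely OHLC values can be pulled; the endpoint path, parameter names and response fields follow the public Cryptocompare documentation as we understood it and should be treated as assumptions.

```python
import requests

# Fetch the latest minutely candles for BTC in USD (endpoint and parameter
# names are assumptions based on the public Cryptocompare documentation).
url = "https://min-api.cryptocompare.com/data/histominute"
params = {"fsym": "BTC", "tsym": "USD", "limit": 60}

response = requests.get(url, params=params, timeout=10)
candles = response.json().get("Data", [])

for candle in candles[-3:]:
    # Each record carries the time, open, close, high and low values in USD.
    print(candle["time"], candle["open"], candle["close"],
          candle["high"], candle["low"])
```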
After collecting all the data, we aligned all tweets and cryptocurrency data into defined time windows of one minute and stored the result in a training data RDD. This training data RDD was further processed as described in the later subsections and then fed into the machine learning algorithm. The same APIs and structure were also used to stream data into KryptoOracle in real time.
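A sketch of this alignment step, keying both streams by their minute and joining them into one training record per minute, is shown below; the record layouts are assumptions.

```python
def align_by_minute(tweet_rdd, price_rdd):
    """Join tweet scores and price candles on their one-minute window.

    tweet_rdd: records of (unix_timestamp, normalized_score)
    price_rdd: records of (unix_timestamp, close_price)
    """
    to_minute = lambda ts: ts - (ts % 60)   # truncate a timestamp to its minute

    minute_scores = (tweet_rdd
                     .map(lambda t: (to_minute(t[0]), t[1]))
                     .reduceByKey(lambda a, b: a + b))      # sum scores per minute

    minute_prices = price_rdd.map(lambda p: (to_minute(p[0]), p[1]))

    # One training record per minute: (minute, (summed_score, close_price)).
    return minute_scores.join(minute_prices)
```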
Experimental Evaluation ::: Procedure and Results
We started by collecting Twitter data with the hashtags #Bitcoin and #BTC for a period of 14 days using Twython, a Python library that uses the Twitter API to extract tweets matching relevant queries. The real-time price of Bitcoin was simultaneously collected using the Cryptocompare API. The Twitter data was cleaned to remove any hashtags, links, images and videos from the tweets. The sentiment score of each tweet was then computed as described in the previous section.
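The collection step can be sketched with Twython as follows; the credentials are placeholders and the search call wraps Twitter's standard search endpoint.

```python
from twython import Twython

# Placeholder credentials; the real keys come from a Twitter developer account.
APP_KEY, APP_SECRET = "YOUR_APP_KEY", "YOUR_APP_SECRET"
OAUTH_TOKEN, OAUTH_TOKEN_SECRET = "YOUR_TOKEN", "YOUR_TOKEN_SECRET"

twitter = Twython(APP_KEY, APP_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET)

# Query recent English tweets carrying the Bitcoin hashtags (the standard
# search endpoint covers roughly the last 7 days).
results = twitter.search(q="#Bitcoin OR #BTC", lang="en",
                         result_type="recent", count=100)

for status in results["statuses"]:
    print(status["id"], status["user"]["followers_count"],
          status["retweet_count"], status["favorite_count"], status["text"])
```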
To analyze the data, we calculated the Spearman and Pearson correlations between the tweet scores and the Bitcoin prices, as shown in Figure FIGREF13. The y-axis of the graphs denotes the lag in minutes, used to check whether there was any lag between the arrival of tweets and the Bitcoin prices. The trend of the tweet scores and the corresponding Bitcoin prices is captured in Figure FIGREF6, where the hourly summed-up Twitter sentiments and the corresponding mean Bitcoin price for each hour are plotted. It can be seen in the figure that some spikes in sentiment scores correspond, directly or with some lag, to movements in the Bitcoin price. We also noticed that the volume of incoming streaming tweets increases at times of radical change, which results in a higher cumulative score for the hour.
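The lagged correlation analysis can be sketched with pandas as below, assuming a DataFrame indexed by time with 'score' and 'price' columns; the maximum lag is a placeholder.

```python
import pandas as pd

def lagged_correlations(df, max_lag=60):
    """Correlate sentiment scores with prices for lags of 0..max_lag minutes."""
    rows = []
    for lag in range(max_lag + 1):
        shifted = df["score"].shift(lag)   # sentiment leads price by `lag` minutes
        rows.append({
            "lag": lag,
            "pearson": shifted.corr(df["price"], method="pearson"),
            "spearman": shifted.corr(df["price"], method="spearman"),
        })
    return pd.DataFrame(rows)
```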
The Bitcoin price and Twitter sentiment features alone were not enough to predict the next-minute price, as they did not capture the ongoing trend. It was therefore important that the historical price of the cryptocurrency also be incorporated into the features so as to get a better prediction for the future. We therefore performed some time series manipulation to engineer three new features for our model. The first feature was the Previous Close Price, which captured the close price of the cryptocurrency in the previous time frame. The next feature was the Moving Average of Close Price, a rolling average of the last 100 time frames' close prices that aimed to capture the pattern with which the price was constrained to change. A similar third feature, called Moving Average of Scores, was designed to capture the rolling average of the last 100 scores and thus the past sentiment information. With these three additional features combined with the final sentiment score computed in the previous subsection, we obtained the final training data shown in Figure FIGREF14.
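These engineered features correspond to standard time series operations; a pandas sketch, assuming a minutely DataFrame with 'close' and 'score' columns, is shown below.

```python
import pandas as pd

def engineer_features(df):
    """Add the lagged and rolling features used alongside the sentiment score."""
    out = df.copy()
    out["prev_close"] = out["close"].shift(1)                      # Previous Close Price
    out["ma_close_100"] = out["close"].rolling(window=100).mean()  # Moving Average of Close Price
    out["ma_score_100"] = out["score"].rolling(window=100).mean()  # Moving Average of Scores
    # The prediction target is the close price of the next time frame.
    out["next_close"] = out["close"].shift(-1)
    return out.dropna()
```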
Once the historical data was stored, all information was fed to the machine learning model. In our experiment, we stored historical data for a month but this can be easily extended as per user requirements.
Once the KryptoOracle engine was bootstrapped with historical data, the real-time streamer was started. The scores of real-time tweets were calculated in the same way as for the historical data, summed up over each minute, and sent to the machine learning model together with the Bitcoin price in the previous minute and the rolling average price. The model predicted the next minute's Bitcoin price from the given data. After the actual price arrived, the RMS error was calculated and the machine learning model updated itself so as to better predict the next value. All the calculated values were then stored back into the Spark training RDD. The RDD persisted all the data during training and check-pointed itself to the Hive database after a certain period of time.
We ran the engine for one day and obtained an overall root mean square (RMS) error of $10 between the actual and the predicted price of Bitcoin. The resulting RMS values can be seen below.
Figure FIGREF15 shows the RMS error (in USD) for a period of 5 hours at the end of our experiment. The visualization produced by KryptoOracle can be seen in Figure FIGREF12, which captures the actual price of Bitcoin and the price predicted by KryptoOracle over the same period of 5 hours. The graph clearly shows how KryptoOracle has been able to correctly predict the Bitcoin price one minute ahead of time. The engine clearly learns from the errors it makes and adjusts itself to predict in real time, which can be seen from the adaptive nature of the predicted price graph.
Conclusion and Future Work
In this paper, we present a novel big data platform that can learn, predict and update itself in real time. We tested the engine on Twitter sentiments and cryptocurrency prices. We envision that this engine can be generalized to work on any real-time changing market trend, such as stock prices, loyalty towards a product or company, or even election results. Sentiments in the real world can be extracted not only from tweets but also from chats on IRC channels, news, and other sources such as images and videos from YouTube or TV channels. This implies that the platform can be customized for tasks where the objective is to make predictions based on social media sentiments. In the future, we plan to create a front-end for this system which can be used to visually capture the trend and also show historical aggregated data as per user input. Such a front-end could also allow the time window for prediction to be tweaked to predict prices further ahead in time.
We understand that cryptocurrency prices are influenced by many factors which cannot be captured by Twitter sentiments. Supply and demand of the coin and the interest of major investors are two major factors BIBREF22. To capture these factors, one has to add more features to the training data with inferences from multiple sources such as news, political reforms and macro-financial external factors such as stocks, gold rates and exchange rates. While we performed our experiments, the cryptocurrency values did not go through any major changes, and thus the engine also needs to be tested with more adverse fluctuations. One way to capture fluctuations could be to trace back to the features that have gone through major changes and adaptively assign them more weight while training the machine learning model.
There is also future work related to the machine learning part of the engine. State-of-the-art time series machine learning algorithms include modern deep learning models such as RNNs and LSTMs BIBREF23, but unfortunately Spark does not provide deep learning libraries yet. There are some plugins, such as Sparkflow, that facilitate neural network support, but work is also under way to provide Spark with such in-built deep learning support. Currently, Spark also does not have much streaming machine learning support, other than linear regression and linear classification. However, the advent of additional streaming algorithm support in Spark will certainly benefit engines such as KryptoOracle.
0457242fb2ec33446799de229ff37eaad9932f2a | 0457242fb2ec33446799de229ff37eaad9932f2a_0 | Q: Which elements of the platform are modular?
Text: Introduction
A cryptocurrency is a digital currency designed to work as a medium of exchange that uses strong cryptography to secure financial transactions, control the creation of additional units, and verify the transfer of assets. They are based on decentralized systems built on block-chain technology, a distributed ledger enforced by a disparate network of computers BIBREF0. The first decentralized cryptocurrency, Bitcoin, was released as open-source software in 2009. After this release, approximately 4000 altcoins (other cryptocurrencies) have been released. As of August 2019, the total market capitalization of cryptocurrencies is $258 billion, where Bitcoin alone has a market capitalization of $179 billion BIBREF1.
Considering the huge market value of these currencies, they have attracted significant attention, where some people consider them as actual currencies and others as investment opportunities. This has resulted in large fluctuations in their prices. For instance in 2017 the value of Bitcoin increased approximately 2000% from $863 on January 9, 2017 to a high of $17,900 on December 15, 2017. However, eight weeks later, on February 5, 2018, the price had been more than halved to a value of just $6200 BIBREF2.
This high volatility in the value of cryptocurrencies means there is uncertainty for both investors, and for people who intend to use them as an actual currency. Cryptocurrency prices do not behave as traditional currencies and, therefore, it is difficult to determine what leads to this volatility. This in turn makes it a challenge to correctly predict the future prices of any cryptocurrency. To predict these prices, huge heterogeneous data volumes need to be collected from various sources such as blogs, IRC channels and social media. Especially, tweets from highly influential people and mass has significant effects on the price of cryptocurrency BIBREF3. However, tweets need to be filtered and their sentiments need to be calculated in a timely fashion to help predict cryptocurrency prices in real time. Furthermore, real-time prediction also calls for real-time updating of learning algorithms, which introduces an additional difficulty. These challenges call for learning platforms based on big data architectures that can not only handle heterogeneous volumes of data but also be fault tolerant and persistent in real time.
In this paper we provide a novel real-time and adaptive cryptocurrency price prediction platform based on Twitter sentiments. The integrative and modular platform copes with the three aforementioned challenges in several ways. Firstly, it provides a Spark-based architecture which handles the large volume of incoming data in a persistent and fault tolerant way. Secondly, the proposed platform offers an approach that supports sentiment analysis based on VADER which can respond to large amounts of natural language processing queries in real time. Thirdly, the platform supports a predictive approach based on online learning in which a machine learning model adapts its weights to cope with new prices and sentiments. Finally, the platform is modular and integrative in the sense that it combines these different solutions to provide novel real-time tool support for bitcoin price prediction that is more scalable, data-rich, and proactive, and can help accelerate decision-making, uncover new opportunities and provide more timely insights based on the available and ever-larger financial data volume and variety.
The rest of the paper is organized as follows. Section 2 discusses the related work proposed in the literature. Section 3 discusses the design and implementation of KryptoOracle in detail and includes the description of all of its sub-components. Section 4 presents an experimental evaluation, including experimental data, setup and results. Finally, section 5 concludes the paper and describes future work.
Related Work
In this section we present a brief review of the state of the art related to cryptocurrency price prediction. Related works can be divided into three main categories: (i) social media sentiments and financial markets (including cryptocurrency markets); (ii) machine learning for cryptocurrency price prediction; and (iii) big data platforms for financial market prediction.
The `prospect theory' framed by Daniel Kahneman and Amos Tversky presents that financial decisions are significantly influenced by risk and emotions, and not just the value alone BIBREF4. This is further reinforced by other works in economic psychology and decision making such as BIBREF5 which show that variations in feelings that are widely experienced by people, influence investor decision-making and, consequently, lead to predictable patterns in equity pricing. These insights, therefore, open the possibility to leverage techniques such as sentiment analysis to identify patterns that could affect the price of an entity.
Considering the emergence and ubiquity of media, especially social media, further works have explored how it effects user sentiment and therefore financial markets. Paul Tetlock in BIBREF6, explains how high media pessimism predicts downward pressure on market prices, and unusually high or low pessimism predicts high trading volume. Moreover, Gartner found in a study that majority of consumers use social networks to inform buying decisions BIBREF7. This insight has given rise to several research materials which have attempted to find correlations between media sentiments and different financial markets.
The authors in BIBREF8 retrieve, extract, and analyze the effects of news sentiments on the stock market. They develop a sentiment analysis dictionary for the financial sector leading to a dictionary-based sentiment analysis model. With this model trained only on news sentiments, the paper achieved a directional accuracy of 70.59% in predicting the trends in short-term stock price movement. The authors in BIBREF9 use the sentiment of message board comments to predict the stock movement. Unlike other approaches where the overall moods or sentiments are considered, this paper extracts the ‘topic-sentiment’ feature, which represents the sentiments of the specific topics of the company and uses that for stock forecasting. Using this method the accuracy average over 18 stocks in one year transactions, achieved 2.07% better performance than the model using historical prices only. Similarly, Alan Dennis and Lingyao Yuan collected valence scores on tweets about the companies in the S&P 500 and found that they correlated with stock prices BIBREF10. The authors in BIBREF11 used a self-organizing fuzzy neural network, with Twitter mood from sentiment as an input, to predict price changes in the DOW Jones Industrial average and achieved a 86.7% accuracy.
The recent emergence of cryptocurrencies and the widespread investment in them has motivated researchers to try to predict their price variations. The authors in BIBREF2 predict price fluctuations for three cryptocurrencies: Bitcoin, Litecoin and Ethereum. The news and social media data was labeled based on actual price changes one day in the future for each coin, rather than on positive or negative sentiment. By taking this approach, the model was able to directly predict price fluctuations instead of needing to first predict sentiment. Logistic regression worked best for Bitcoin predictions and the model was able to predict 43.9% of price increases and 61.9% of price decreases correctly. A work by Abraham et al. uses Twitter sentiment and Google Trends data to predict the price of Bitcoin and Ethereum BIBREF12. The paper uses the tweet volume in addition to the Twitter sentiment to establish a correlation with cryptocurrency price.
KryptoOracle draws its greatest inspiration from BIBREF13 and BIBREF14. Both works use Twitter sentiments to find correlations with Bitcoin prices. The tweets are cleaned of non-alphanumeric symbols and then processed with VADER (Valence Aware Dictionary and sEntiment Reasoner) to analyze the sentiment of each tweet and classify it as negative, neutral, or positive. The compound sentiment score is then used to establish correlation with the Bitcoin prices over different lag intervals. KryptoOracle builds on what has been discussed above but goes further, constructing a prediction engine that forecasts Bitcoin prices at specified intervals.
Machine learning has also been employed directly for cryptocurrency price prediction. For instance, the authors in BIBREF15 contribute to the Bitcoin forecasting literature by testing auto-regressive integrated moving average (ARIMA) and neural network auto-regression (NNAR) models to forecast the daily price movement based only on the historical price points. Similarly, the author in BIBREF16 presents a neural network framework to provide a deep machine learning solution to the cryptocurrency price prediction problem. The framework is realized in three instances with a Multi-layer Perceptron (MLP), a simple Recurrent Neural Network (RNN) and a Long Short-Term Memory (LSTM), which can learn long-term dependencies. In contrast, our prediction model, in addition to considering the social media influence, also employs online learning to continuously learn from its mistakes and improve itself in the process.
Since our engine is designed to run for an indefinite amount of time and continuously obtains real-time data, it is inevitable that this will lead to data storage concerns in the long run. Therefore, we treat our objective as a big data problem and employ big data tools to ensure scalability and performance. We take inspiration from BIBREF17, which uses Apache Spark and Hadoop HDFS to forecast stock market trends based on social media sentiment and historical price. Similarly, we leverage the performance of Apache Spark RDDs and the persistence of Apache Hive to build a solution that is fast, accurate and fault-tolerant. To our knowledge, KryptoOracle is the first solution of its kind that provides out-of-the-box support for real-time cryptocurrency price forecasting based on Twitter sentiments while ensuring that the data volume does not become a bottleneck to its performance.
KryptoOracle
KryptoOracle is an engine that aims at predicting the trends of any cryptocurrency based on the sentiment of the crowd. It does so by learning the correlation between the sentiments of relevant tweets and the real-time price of the cryptocurrency. The engine bootstraps itself by first learning from the history given to it and then starts predicting based on the learned correlation. KryptoOracle is also capable of reinforcing itself with the mistakes it makes, improving its predictions over time. In addition, the engine supports trend visualization over time based on records of both incoming data and intermediate results. This engine has been built keeping in mind the increasing data volume, velocity and variety that has been made available, and is therefore able to scale and manage high volumes of heterogeneous data.
KryptoOracle has been built in the Apache ecosystem and uses Apache Spark. Data structures in Spark are based on resilient distributed datasets (RDDs), read-only multi-sets of data which can be distributed over a cluster of machines and are fault-tolerant. Spark applications run as independent sets of processes on a cluster, coordinated by the Spark driver object, also referred to as the SparkContext. This element is the main driver of the program: it connects with the cluster manager and helps acquire executors on different nodes to allocate resources across applications. Spark is highly scalable, reported to be up to 100x faster than Hadoop on large datasets, and provides out-of-the-box libraries for both streaming and machine learning.
KryptoOracle ::: Architecture
The growing volume of data inspired us to opt for a big data architecture which can handle not only the prediction algorithms but also the streaming and increasing volume of data in a fault-tolerant way.
Figure FIGREF2 gives an overview of the architecture design. Central to this design is Apache Spark which acts as an in-memory data store and allows us to perform computations in a scalable manner. This data is the input to our machine learning model for making predictions. To bootstrap our model, we first gather a few days of data and store that in Apache Spark RDDs. Next, we perform computations to construct features from the raw data. All these computations are performed on data that is distributed across multiple Spark clusters and therefore will scale as the data grows continuously.
Once the machine learning model has been bootstrapped, we commence data streaming to get real-time data related to both the social media (in our case, Twitter) and the cryptocurrency. Similar computations are performed on this data to calculate the features and then this new data-point is used to get a future prediction from the model. This computed data-point is then appended to the already existing data in Spark RDDs, obtained from the bootstrap data. Therefore, in addition to making predictions we also keep expanding our data store which allows us to extract holistic visualizations from the data regarding the cryptocurrency market trend and how our own predictions capture that. Moreover, as we discuss later the new data-points are also used to retrain our model.
An important property of this architecture is the persistence of the data and the model. The machine learning model persists itself by storing its weights to disk and loading them while retraining or reinforcing itself to learn from mistakes. The tweets and cryptocurrency training data are also stored in Apache Hive, which provides data warehousing support to read, write and manage distributed datasets directly from disk. This persistence technique helps the whole platform recover to a consistent state in real time without losing data.
Spark RDDs have the innate capability to recover because all execution steps are stored in a lineage graph. In case of any fault in the system, such as a memory overload, Spark redoes all the previous executions from the built DAG and recovers to the previous steady state. Spark RDDs lie at the core of KryptoOracle and therefore make it easier to recover from faults. Moreover, faults like memory overload or system crashes may require the whole system to hard reboot. However, due to the duplicate copies of the RDDs in Apache Hive and the stored previous state of the machine learning model, KryptoOracle can easily recover to the previous steady state.
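To make this concrete, the persistence scheme can be sketched with standard PySpark and Hive primitives. The session settings, schema, checkpoint directory and table name below are illustrative assumptions rather than the actual KryptoOracle configuration:

```python
from pyspark import StorageLevel
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("KryptoOracle")
         .enableHiveSupport()        # allows persisting training data to Hive tables
         .getOrCreate())
sc = spark.sparkContext
sc.setCheckpointDir("hdfs:///tmp/kryptooracle_checkpoints")

# Training store: one record per minute (timestamp, summed tweet score, close price).
training_rdd = sc.parallelize([(1559347200, 12.4, 8740.0)])
training_rdd.persist(StorageLevel.MEMORY_AND_DISK)
training_rdd.checkpoint()            # truncates the lineage graph for faster recovery

# Periodic duplicate copy in Hive, so a hard reboot can restore the previous steady state.
spark.createDataFrame(training_rdd, ["timestamp", "score", "close"]) \
     .write.mode("overwrite").saveAsTable("kryptooracle_training")
```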
KryptoOracle ::: Sentiment Analysis
In KryptoOracle we focus on sentiment analysis on a document level where each tweet is considered as a single document and we intend to determine its sentiment score. In general, there are primarily two main approaches for sentiment analysis: machine learning-based and lexicon-based. Machine learning-based approaches use classification techniques to classify text, while lexicon-based methods use a sentiment dictionary with opinion words and match them with the data to determine polarity. They assign sentiment scores to the opinion words describing how positive or negative the words contained in the dictionary are BIBREF18. Machine learning-based approaches are inherently supervised and require an adequately large training set for the model to learn the differentiating characteristics of the text corpus. In this paper we choose to forego this training aspect in favour of using a lexicon-based approach. This is because our objective is not to innovate in the natural language processing domain but instead to establish a scalable architecture that is able to capture the relationship between social media sources and financial markets, specifically in the context of the cryptocurrency market.
To measure the sentiment of each tweet, VADER (Valence Aware Dictionary and sEntiment Reasoner) is used BIBREF19. VADER is a lexicon and rule-based sentiment analysis tool that is specifically attuned to sentiments expressed in social media. When given a text corpus, VADER outputs three valence scores, one for each sentiment class, i.e., positive, negative and neutral. A fourth compound score is computed by summing the valence scores of each word in the lexicon, adjusted according to the rules, and then normalized to be between -1 (extreme negative) and +1 (extreme positive). To summarize, it is a normalized, weighted composite score. This is the most useful metric for us since it provides a single uni-dimensional measure of sentiment for a given tweet. Therefore, we capture the sentiment of each tweet using the compound score.
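As an illustration, the compound score can be obtained with the vaderSentiment package (NLTK ships an equivalent analyzer); this is a minimal sketch rather than the KryptoOracle code itself, and the example tweet is made up:

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
scores = analyzer.polarity_scores("Bitcoin just broke through its resistance level!")
# scores is a dict with 'neg', 'neu', 'pos' and 'compound' keys
compound = scores["compound"]   # single measure in [-1, +1] used as the tweet sentiment
```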
However, this score is not the final metric that we use to build our machine learning model. It is quite intuitive that tweets belonging to influential personalities should be assigned more weight, since they will have a more significant impact on the price of any cryptocurrency. To capture this relationship, the compound score is multiplied by the poster's follower count, the number of likes on the tweet and the retweet count. The final score is calculated with the following equation:
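Based on the description in the following paragraph, the weighted score takes the form below; this is a reconstruction, so the original notation may differ slightly:

$$\text{Score} = \text{Compound} \times \text{UserFollowerCount} \times (\text{RetweetCount} + 1) \times (\text{Likes} + 1)$$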
The +1 added to both the RetweetCount and Likes ensures that the final score does not become zero if there are no likes or re-tweets for the tweet in question. UserFollowerCount does not have +1, in order to filter out the numerous bots on Twitter which flood cryptocurrency forums. We further normalize the score by taking the square root of its magnitude and multiplying by -1 if the score is negative. This final score belongs to a single tweet, and since our prediction scope is for a certain time frame, we sum up all the normalized scores for the different tweets received during that time frame. This summed-up score is then used as one of the features for our model to predict the cryptocurrency price for the future time frame.
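Putting the weighting and the normalization together, the per-tweet score can be sketched as follows (the function and variable names are ours, and the sample tweets are synthetic):

```python
import math

def tweet_score(compound, followers, retweets, likes):
    """Weight the VADER compound score by the tweet's reach, then take a sign-preserving square root."""
    raw = compound * followers * (retweets + 1) * (likes + 1)
    normalized = math.sqrt(abs(raw))
    return -normalized if raw < 0 else normalized

# Scores of all tweets in a time frame are then summed into a single feature:
frame_score = sum(tweet_score(*t) for t in [(0.42, 1500, 3, 10), (-0.6, 80, 0, 1)])
```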
KryptoOracle ::: Machine Learning
An important element of our architecture is the machine learning model, trained to capture the correlation between social media sentiment and a certain metric of the financial market, in our case, the price of cryptocurrency. An essential characteristic of the model is that it should be able to continuously evolve and adjust its weights according to the ever-changing social media sentiments and the volatile cryptocurrency market. We discuss later how we incorporate this in our model design. However, it is worth mentioning that our problem deals with structured data with features related to the social media sentiments and primitive or computed metrics of the cryptocurrency market.
In prediction problems involving unstructured data, ANNs (Artificial Neural Networks) tend to outperform all other algorithms or frameworks. However, when it comes to small-to-medium structured/tabular data like in our case, decision-tree-based algorithms are currently considered best-in-class. Therefore, we experimented with a few techniques but ultimately decided to use XGBoost BIBREF20 owing to its speed, performance and the ease with which it can be re-trained. XGBoost support for PySpark is still under development, so at this moment we choose to deploy the model outside of our Spark framework. For bootstrapping the model, historical data points are exported outside the Spark framework and used to train the model initially. After this, as new real-time data arrives, it is processed to create a new data-point of the required features. This data-point is then also exported outside Spark and fed to the machine learning model to obtain a prediction for the future price.
To continuously improve the model we employ online learning. The model is saved to disk and after every prediction we wait for the actual price value to arrive. This actual price value is then used to retrain the model as shown in Figure FIGREF5, so that it can learn from the error between the value it had predicted earlier and the actual value that arrived later. In this way the model keeps readjusting its weights to stay up to date with the market trends.
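With XGBoost, this retraining loop can be approximated by warm-starting each fit from the previously saved booster. The snippet below is a sketch under that assumption, with synthetic placeholder data and file names, not the exact KryptoOracle implementation:

```python
import numpy as np
import xgboost as xgb

# Placeholder features: [summed sentiment score, previous close, moving averages ...]
X_boot, y_boot = np.random.rand(500, 4), np.random.rand(500)
X_new, y_new = np.random.rand(1, 4), np.random.rand(1)

model = xgb.XGBRegressor(n_estimators=100)
model.fit(X_boot, y_boot)                      # bootstrap on historical data
model.save_model("kryptooracle_model.json")    # persisted to disk between predictions

next_price = model.predict(X_new)              # prediction for the coming minute

# Once the actual price arrives, warm-start from the saved booster and keep learning:
model.fit(X_new, y_new, xgb_model="kryptooracle_model.json")
model.save_model("kryptooracle_model.json")
```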
Experimental Evaluation
We used PySpark v2.3 in Jupyter notebooks with Python 2.7 kernels to code KryptoOracle. The entire source code was tested on a server instance on the SOSCIP cloud with 32 GB RAM, 8 CPUs and 120 GB HDD running Ubuntu 18.04, over a period of 30 days. The data extraction and correlation code was taken from “Correlation of Twitter sentiments with the evolution of cryptocurrencies," which is publicly available BIBREF14. The data collected for this experiment was for the Bitcoin cryptocurrency.
Experimental Evaluation ::: Data
The data fed into KryptoOracle is primarily of two types: Twitter data, which consists of tweets related to the cryptocurrency, and the minutely cryptocurrency value.
Twitter data: We used the Twitter API to scrape tweets with hashtags. For instance, for Bitcoin, the #BTC and #Bitcoin tags were used. The Twitter API only allows a maximum of 450 requests per 15 minutes and historical data up to 7 days old. Throughout our project we collected data for almost 30 days. Bitcoin had about 25000 tweets per day, amounting to a total of approximately 10 MB of data daily. For each tweet, the ID, text, username, number of followers, number of retweets, and creation date and time were stored. All non-English tweets were filtered out by the API. We further processed the full tweet text by removing links, images, videos and hashtags before feeding it to the algorithm.
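Tweet collection with Twython can be sketched as follows; the credentials, query and cleaning rules are placeholders meant only to illustrate the fields that KryptoOracle stores:

```python
import re
from twython import Twython

twitter = Twython("APP_KEY", "APP_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET")  # placeholder credentials
results = twitter.search(q="#Bitcoin OR #BTC", lang="en", result_type="recent", count=100)

def clean(text):
    # Strip links, hashtags and mentions before sentiment scoring.
    return re.sub(r"http\S+|#\S+|@\S+", "", text).strip()

tweets = [{
    "id": s["id"],
    "text": clean(s["text"]),
    "user": s["user"]["screen_name"],
    "followers": s["user"]["followers_count"],
    "retweets": s["retweet_count"],
    "likes": s["favorite_count"],
    "created_at": s["created_at"],
} for s in results["statuses"]]
```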
Cryptocurrency data: To obtain cryptocurrency data, the Cryptocompare API BIBREF21 was used. It offers a free API that provides the minutely values of any cryptocurrency for the past 7 days. The data has several fields: time, open, close, high and low, which correspond to the opening, closing, high and low values of the cryptocurrency in that particular time frame, in USD.
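A minutely OHLC request might look like the following; the endpoint path and parameters reflect the public Cryptocompare histominute API as we understand it and should be checked against the current documentation:

```python
import requests

resp = requests.get(
    "https://min-api.cryptocompare.com/data/histominute",
    params={"fsym": "BTC", "tsym": "USD", "limit": 60},   # last 60 one-minute candles
)
candles = resp.json()["Data"]
minutes = [{"time": c["time"], "open": c["open"], "close": c["close"],
            "high": c["high"], "low": c["low"]} for c in candles]
```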
After collecting all the data, we aligned all tweets and cryptocurrency data by defined time windows of one minute and stored the resulting data into a training data RDD. This training data RDD was further processed as described in the later subsections and then fed into the machine learning algorithm. The same API and structure was also used to stream in real time to KryptoOracle.
Experimental Evaluation ::: Procedure and Results
We started by collecting Twitter data with hashtags #Bitcoin and #BTC for a period of 14 days using Twython, a Python library which uses the Twitter API to extract tweets using relevant queries. The real-time price of Bitcoin was simultaneously collected using the Cryptocompare API. The Twitter data was cleaned to remove any hashtags, links, images and videos from the tweets. The sentiment score of each tweet was then computed as described in the previous section.
To analyze the data, we calculated the Spearman and Pearson correlation between the tweet scores and the Bitcoin prices, as shown in Figure FIGREF13. The y-axis of the graphs denotes the lag in minutes, to see if there was any lag between the arrival of tweets and the Bitcoin prices. The trend of the tweet scores and the corresponding Bitcoin prices is captured in Figure FIGREF6. The hourly summed-up Twitter sentiments and their corresponding mean Bitcoin price for the hour have been plotted in the graph. It can be seen in the figure that some spikes in sentiment scores correspond, directly or with some lag, to movements in the Bitcoin price. We also noticed that the volume of incoming streaming tweets increases at the time of a radical change, which results in a higher cumulative score for the hour.
The Bitcoin price and Twitter sentiment features were not enough to predict the next minute's price, as they did not capture the ongoing trend. It was therefore important that the historical price of the cryptocurrency was also incorporated in the features so as to get a better prediction for the future. We therefore performed some time series manipulation to engineer additional features for our model. The first feature was the Previous Close Price, which captured the close price of the cryptocurrency in the previous time frame. The next feature was the Moving Average of Close Price. This feature was a rolling average of the last 100 time frame close prices and aimed to capture the pattern with which the price was constrained to change. A similar new third feature, called Moving Average of Scores, was designed to capture the rolling average of the last 100 scores. This new feature captured the past sentiment information. With these three additional features combined with the final sentiment score computed in the previous subsection, we got the final training data as shown in Figure FIGREF14.
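These engineered features correspond to standard time-series operations in pandas; a sketch with our own column names and synthetic data:

```python
import numpy as np
import pandas as pd

# One row per minute: 'close' is the Bitcoin close price, 'score' the summed tweet score.
df = pd.DataFrame({"close": 8000 + 100 * np.random.rand(300),
                   "score": 50 * np.random.rand(300)})
df["prev_close"] = df["close"].shift(1)                       # Previous Close Price
df["close_ma_100"] = df["close"].rolling(window=100).mean()   # Moving Average of Close Price
df["score_ma_100"] = df["score"].rolling(window=100).mean()   # Moving Average of Scores
features = df.dropna()[["score", "prev_close", "close_ma_100", "score_ma_100"]]
```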
Once the historical data was stored, all information was fed to the machine learning model. In our experiment, we stored historical data for a month but this can be easily extended as per user requirements.
Once the KryptoOracle engine was bootstrapped with historical data, the real-time streamer was started. The real-time tweet scores were calculated in the same way as for the historical data, summed up for a minute and sent to the machine learning model together with the Bitcoin price in the previous minute and the rolling average price. The model predicted the next minute's Bitcoin price from the given data. After the actual price arrived, the RMS value was calculated and the machine learning model updated itself in order to better predict the next value. All the calculated values were then stored back in the Spark training RDD for storage. The RDD persisted all the data while training and check-pointed itself to the Hive database after a certain period of time.
We ran the engine for one day and got an overall root mean square (RMS) error of $10 between the actual and the predicted price of Bitcoin. The results for the RMS values can be seen below.
Figure FIGREF15 shows the RMS error (in USD) for a period of 5 hours at the end of our experiment. The visualization graph at the end of KryptoOracle can be seen in Figure FIGREF12, which captures the actual price of Bitcoin and the price predicted by KryptoOracle over the same period of 5 hours. The graph clearly shows how KryptoOracle has been able to correctly predict the Bitcoin price 1 minute ahead of time. The engine clearly learns from the errors it makes and rewires itself to predict in real time, which can be seen from the adaptive nature of the predicted price graph.
Conclusion and Future Work
In this paper, we present a novel big data platform that can learn, predict and update itself in real time. We tested the engine on Twitter sentiments and cryptocurrency prices. We envision that this engine can be generalized to work on any real-time changing market trend, such as stock prices, loyalty towards a product or company, or even election results. Sentiments in the real world can be extracted not only from tweets but also from chats on IRC channels, news, and other sources such as images and videos from YouTube or TV channels. This implies that the platform can be customized for tasks where the objective is to make predictions based on social media sentiments. In future, we plan to create a front-end for this system which can be used to visually capture the trend and also show historical aggregated data as per user input. Such a front-end could also allow the time window for prediction to be tweaked to predict prices further ahead in time.
We understand that cryptocurrency prices are influenced by many factors which cannot be captured by Twitter sentiments. Supply and demand of the coin and the interest of major investors are two major factors BIBREF22. To capture these factors, one has to add more features to the training data, with inferences from multiple sources such as news, political reforms and macro-financial external factors such as stocks, gold rates and exchange rates. While we performed our experiments, the cryptocurrency values did not go through any major changes, and thus this engine also needs to be tested with more adverse fluctuations. One way to capture fluctuations can be to trace back to the features that have gone through the major changes and adaptively assign them more weights while training the machine learning model.
There is also future work related to the machine learning part of the engine. The state of the art time series machine learning algorithms include the modern deep learning algorithms such as RNNs and LSTMs BIBREF23, but unfortunately Spark does not provide deep learning libraries yet. There are some plugins, such as Sparkflow, that facilitate neural network support, but work is also under way to provide Spark with such in-built deep learning support. Currently, Spark also does not have much streaming machine learning support, other than linear regression and linear classification. However, the advent of additional streaming algorithm support in Spark will certainly benefit engines such as KryptoOracle. | handling large volume incoming data, sentiment analysis on tweets and predictive online learning |
5e997d4499b18f1ee1ef6fa145cadbc018b8dd87 | 5e997d4499b18f1ee1ef6fa145cadbc018b8dd87_0 | Q: What is the source of memes?
Text: Motivation
The spread of misinformation or hate messages through social media is a central societal challenge given the unprecedented broadcast potential of these tools. While there already exist some moderation mechanisms such as crowd-sourced abuse reports and dedicated human teams of moderators, the huge and growing scale of these networks requires some degree of automation for the task.
Social networks have already introduced many tools to detect offensive or misleading content, both for visual and textual content, ranging from nudity and pornography BIBREF0, BIBREF1 to hate speech BIBREF2 and misinformation BIBREF3. However, machine learning is still facing some challenges when processing borderline or figurative content such as nudity in paintings, political satire or other forms of humorous content. In particular for the case of hate speech, rapidly evolving topics and shifting trends in social media make its detection a topic of constant and active research.
This work takes one step forward and, instead of focusing on visual or linguistic content alone, we tackle the challenging problem of detecting hate speech in memes. Memes are a form of humorous multimedia document, normally based on an image with some sort of caption text embedded in the image pixels. Memes have gained a lot of popularity in the last few years and have been used in many different contexts, especially by young people. However, this format has also been used to produce and disseminate hate speech in the form of dark humour. The multimodal nature of memes makes them very challenging to analyze because, while the visual and linguistic information is typically neutral or actually funny in isolation, their combination may result in hate speech messages.
Our work explores the potential of state-of-the-art deep neural networks to detect hate speech in memes. We study the gain in accuracy when detecting hate speech in memes by fusing the vision and language representations, compared with using the two modalities separately. Our experiments indicate that while meme detection is a multimodal problem that benefits from analyzing both modalities, this societal task is far from being solved given the high abstraction level of the messages contained in memes.
Related Work
Hate speech is a widely studied topic in the context of social science. This phenomenon has been monitored, tracked, measured or quantified on a number of occasions BIBREF4, BIBREF5, BIBREF6. It appears in media such as newspapers or TV news, but hate speech with very diverse targets has appeared above all in social networks BIBREF7, BIBREF8, BIBREF9. Most work on hate speech detection has focused on language. The most common approach is to generate an embedding of some kind, using bag-of-words BIBREF8 or N-gram features BIBREF10, often using expert knowledge for keywords. After that, the embedding is fed to a binary classifier to predict hate speech. To the best of our knowledge, there is no previous work on detecting hate speech by combining language with visual content, as in memes. Our technical solution is inspired by BIBREF11, in which gang violence on social media was predicted with a multimodal approach that fused images and text. Their model extracted features from both modalities using pretrained embeddings for language and vision, and later merged both vectors to feed the multimodal features into a classifier.
Model
The overall system expects an Internet meme input, and produces a hate score as an output. Figure FIGREF1 shows a block diagram of the proposed solution.
The first step of the process is extracting the text of the image with Optical Character Recognition (OCR). The text detected by the OCR is encoded in a BERT BIBREF12 representation for language. We used the Tesseract 4.0.0 OCR with a Python wrapper on top. The BERT encoding generates contextual (sub)word embeddings, which we turn into a sentence embedding by averaging them. We used a publicly available PyTorch implementation. This implementation includes multiple pretrained versions and we chose the one called bert-base-multilingual-cased. This version has 12 layers, 768 hidden dimensions, 12 attention heads with a total of 110M parameters and is trained on 104 languages.
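A sketch of this step using pytesseract and the Hugging Face transformers library is shown below; the paper used a different BERT wrapper, so treat this as an equivalent re-implementation with an assumed image path rather than the original code:

```python
import torch
import pytesseract
from PIL import Image
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = BertModel.from_pretrained("bert-base-multilingual-cased").eval()

text = pytesseract.image_to_string(Image.open("meme.jpg"))      # OCR step
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    hidden = bert(**inputs).last_hidden_state                   # (1, seq_len, 768)
sentence_embedding = hidden.mean(dim=1)                          # average of (sub)word embeddings
```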
The visual information was encoded with a VGG-16 convolutional neural network BIBREF13 trained on ImageNet BIBREF14. We then used the activations from a hidden layer as the feature vector for the image. Specifically, we used the last hidden layer before the output, which has 4096 dimensions. We obtained the pretrained model from the TorchVision module in PyTorch.
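The 4096-dimensional descriptor can be obtained by truncating the VGG-16 classifier head; a sketch using torchvision, where the preprocessing values are the standard ImageNet statistics and the image path is a placeholder:

```python
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

vgg = models.vgg16(pretrained=True).eval()
vgg.classifier = nn.Sequential(*list(vgg.classifier.children())[:-1])  # drop the final 1000-way layer

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
img = preprocess(Image.open("meme.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    image_embedding = vgg(img)    # (1, 4096) activations of the last hidden layer
```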
The text and image encodings were combined by concatenation, which resulted in a feature vector of 4,864 dimensions. This multimodal representation was afterwards fed as input into a multi-layer perceptron (MLP) with two hidden layers of 100 neurons with a ReLU activation function. A last single neuron with no activation function was added at the end to predict the hate speech detection score.
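The fusion classifier is then a small feed-forward network over the concatenated 4,864-dimensional vector; a PyTorch sketch consistent with the description above (the class name is ours):

```python
import torch
import torch.nn as nn

class HateMemeMLP(nn.Module):
    def __init__(self, text_dim=768, image_dim=4096, hidden=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),        # single output neuron, no activation
        )

    def forward(self, text_emb, image_emb):
        return self.net(torch.cat([text_emb, image_emb], dim=1))
```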
Dataset
We built a dataset for the task of hate speech detection in memes with 5,020 images that were weakly labeled into hate or non-hate memes, depending on their source. Hate memes were retrieved from Google Images with a downloading tool. We used the following queries to collect a total of 1,695 hate memes: racist meme (643 memes), jew meme (551 memes), and muslim meme (501 memes). Non-hate memes were obtained from the Reddit Memes Dataset. We assumed that none of these memes contain a hate message, as we considered that average Reddit memes do not belong to this class. A total of 3,325 non-hate memes were collected. We split the dataset into train (4,266 memes) and validation (754 memes) subsets. The splits were random and the distribution of classes in the two subsets is the same. We did not split the dataset into three subsets because of the small amount of data available, and instead relied on the validation set metrics.
Experiments
Our experiments aimed at estimating the potential of a multimodal hate speech detector, and study the impact of a multimodal analysis when compared to using language or vision only.
We estimated the parameters of the MLP on top of the meme encoding with an ADAM optimizer with a learning rate of 0.1, betas=(0.9, 0.999) and $\varepsilon =10^{-8}$, weight decay of 0, a batch size of 25, and a dropout of 0.2 on the first hidden layer. The network was trained with a Mean Squared Error (MSE) loss, but assessed in terms of binary accuracy.
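In code, that training configuration corresponds roughly to the following, reusing the MLP sketch above (the helper function is ours):

```python
import torch

model = HateMemeMLP()                  # from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=0.1,
                             betas=(0.9, 0.999), eps=1e-8, weight_decay=0)
criterion = torch.nn.MSELoss()         # trained with MSE, evaluated with binary accuracy

def train_step(text_emb, image_emb, label):
    optimizer.zero_grad()
    loss = criterion(model(text_emb, image_emb).squeeze(1), label)
    loss.backward()
    optimizer.step()
    return loss.item()
```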
Figure FIGREF7 presents the results of the training with the three considered configurations: language only, vision only, and a multimodal solution. In the single-modality cases, the input layer of the MLP is adjusted to the size of the encoded representation. The curves show how the blue line representing the multimodal case obtains the best results, closely followed by the orange one of the vision-only case. The language-only configuration performs clearly worse than the other two. Nevertheless, the three curves are consistently above the baseline accuracy of $0.66$, which would be achieved by a dummy predictor that always outputs the non-hate class, because of the 34%-66% class imbalance of the dataset.
Table TABREF11 provides numerical results comparing the three configurations based on two different metrics: Max. Accuracy corresponds to the best accuracy obtained in any epoch, while Smth Max. Accuracy corresponds to the smoothed accuracy to which the model was converging. This was estimated by smoothing the curve with a momentum average and picking the best value. We thought the second metric was a good estimation of the real performance of the model due to the huge validation accuracy fluctuation between epochs in evaluation. Also, since the classes are imbalanced, we computed the precision-recall curve for the best multimodal model, getting an Average Precision of $0.81$.
We consider that the superior performance of the vision-only configuration over the language-only one may be due to a diversity of reasons. Firstly, the most obvious one is that the dimensionality of the image representation (4096) is much larger than that of the linguistic one (768), so it has the capacity to encode more information. Also, the different models have different numbers of parameters due to the different MLP inputs, and we did not take this variation of model capacity into consideration. Secondly, we think there might be a visual bias in the dataset, mainly because there are more modern-style memes in the non-hate class and more classic-style memes in the hate class. Classic and modern styles basically refer to the format and placement of the text. Figure FIGREF12 (a) and (b) are examples of them. Also, we found some false positives in the hate class and there might be false negatives in the non-hate Reddit set. Finally, memes are often highly compressed images with an important level of distortion. This fact may affect the quality of the OCR recognition and, therefore, the language encoding, as shown in Figure FIGREF12 (c).
The training code and models are publicly available to facilitate reproducibility.
Conclusions
Our study on hate speech detection in memes concludes that it is possible to automate the task, in the sense that a simple configuration using state-of-the-art image and text encoders can detect some of them. However, the problem is far from being solved, because the best accuracy obtained, $0.83$, seems modest despite being much better than the greedy solution of always predicting the most frequent class. The proposed system may be used for filtering some of the memes distributed through a social network, but it would still require a human moderator for many of them.
Unfortunately, the system may actually also be used for the opposite of detecting hate speech memes, namely helping in their creation. Given a large amount of sentences and images, a misuse of the system may assess the hate score of each possible pair of text and image to find novel combinations with an expected high hate level.
The experiments also show that the visual cues are much more important than the linguistic ones when detecting hate speech memes, a totally opposite scenario to previous studies focusing on language-based hate speech detection. While the best results are obtained with the multimodal approach, the gain with respect to the vision-only one is small. A practical deployment of this system should evaluate whether the computational cost of running the OCR and encoding the extracted text is worthwhile given the reported gains in accuracy.
The present work poses a new challenge to the multimedia analysis community, which has been proven to be difficult but not impossible. Given the rich affective and societal content in memes, an effective solution should probably also take into account much more additional information than just the one contained in the meme, such as the societal context in which the meme is posted.
Acknowledgements
This work has been developed in the framework of project TEC2016-75976-R, funded by the Spanish Ministerio de Economía y Competitividad and the European Regional Development Fund (ERDF), and the Industrial Doctorate 2017-DI-011 funded by the Government of Catalonia. We gratefully acknowledge the support of NVIDIA Corporation with the donation of some of the GPUs used for this work. | Google Images, Reddit Memes Dataset |
12c7d79d2a26af2d445229d0c8ba3ba1aab3f5b5 | 12c7d79d2a26af2d445229d0c8ba3ba1aab3f5b5_0 | Q: Is the dataset multimodal? | Yes |
98daaa9eaa1e1e574be336b8933b861bfd242e5e | 98daaa9eaa1e1e574be336b8933b861bfd242e5e_0 | Q: How is each instance of the dataset annotated? | weakly labeled into hate or non-hate memes, depending on their source |
a93196fb0fb5f8202912971e14552fd7828976db | a93196fb0fb5f8202912971e14552fd7828976db_0 | Q: Which dataset do they use for text modelling?
Text: Introduction
Variational Autoencoder (VAE) BIBREF1 is a powerful method for learning representations of high-dimensional data. However, recent attempts at applying VAEs to text modelling have been far less successful than their applications to image and speech BIBREF2, BIBREF3, BIBREF4. When applying VAEs to text modelling, recurrent neural networks (RNNs) are commonly used as the architecture for both encoder and decoder BIBREF0, BIBREF5, BIBREF6. While such a VAE-RNN based architecture can effectively encode and generate (in the decoding phase) sentences of variable length, it is also vulnerable to an issue known as latent variable collapse (or KL loss vanishing), where the posterior collapses to the prior and the model ignores the latent codes in generative tasks.
Various efforts have been made to alleviate the latent variable collapse issue. BIBREF0 uses KL annealing, where a variable weight is added to the KL term in the cost function at training time. BIBREF7 discovered that there is a trade-off between the contextual capacity of the decoder and effective use of encoding information, and developed a dilated CNN as decoder which can vary the amount of conditioning context. They also introduced a loss clipping strategy in order to make the model more robust. BIBREF5 addressed the problem by replacing the standard normal distribution for the prior with the von Mises-Fisher (vMF) distribution. With vMF, the KL loss only depends on the concentration parameter which is fixed during training and testing, and hence results in a constant KL loss. In a more recent work, BIBREF6 avoided latent variable collapse by including skip connections in the generative model, where the skip connections enforce strong links between the latent variables and the likelihood function.
Although the aforementioned works show effectiveness in addressing the latent variable collapse issue to some extent, they either require careful engineering to balance the weight between the reconstruction loss and the KL loss BIBREF0, BIBREF8, or resort to designing more sophisticated model structures BIBREF7, BIBREF5, BIBREF6.
In this paper, we present a simple architecture called holistic regularisation VAE (HR-VAE), which can effectively avoid latent variable collapse. In contrast to existing VAE-RNN models for text modelling which merely impose a standard normal distribution prior on the last hidden state of the RNN encoder, our HR-VAE model imposes regularisation for all hidden states of the RNN encoder. Another advantage of our model is that it is generic and can be applied to any existing VAE-RNN-based architectures.
We evaluate our model against several strong baselines which apply VAE for text modelling BIBREF0, BIBREF7, BIBREF5. We conducted experiments based on two public benchmark datasets, namely, the Penn Treebank dataset BIBREF9 and the end-to-end (E2E) text generation dataset BIBREF10. Experimental results show that our HR-VAE model not only can effectively mitigate the latent variable collapse issue with a stable training process, but also can give better predictive performance than the baselines, as evidenced by both quantitative (e.g., negative log likelihood and perplexity) and qualitative evaluation. The code for our model is available online.
Methodology ::: Background of VAE
A variational autoencoder (VAE) is a deep generative model which combines variational inference with deep learning. The VAE modifies the conventional autoencoder architecture by replacing the deterministic latent representation $\mathbf {z}$ of an input $\mathbf {x}$ with a posterior distribution $P(\mathbf {z}|\mathbf {x})$, and imposing a prior distribution on the posterior, such that the model allows sampling from any point of the latent space and yet is able to generate novel and plausible output. The prior is typically chosen to be a standard normal distribution, i.e., $P(\mathbf {z}) = \mathcal {N}(\mathbf {0},\mathbf {1})$, such that the KL divergence between the posterior and the prior can be computed in closed form BIBREF1.
To train a VAE, we need to optimise the marginal likelihood $P_{\theta }(\mathbf {x})=\int {P(\mathbf {z})P_{\theta }(\mathbf {x}|\mathbf {z})d\mathbf {z}}$, where the log likelihood can take the following form:
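Based on the definitions in the next paragraph, this refers to the standard VAE decomposition, written out here for completeness:

$\log P_{\theta }(\mathbf {x}) = \mathcal {L}(\theta ,\phi ;\mathbf {x}) + D_{KL}\big (Q_{\phi }(\mathbf {z}|\mathbf {x}) \, \Vert \, P_{\theta }(\mathbf {z}|\mathbf {x})\big ), \quad \mathcal {L}(\theta ,\phi ;\mathbf {x}) = \mathbb {E}_{Q_{\phi }(\mathbf {z}|\mathbf {x})}\big [\log P_{\theta }(\mathbf {x}|\mathbf {z})\big ] - D_{KL}\big (Q_{\phi }(\mathbf {z}|\mathbf {x}) \, \Vert \, P(\mathbf {z})\big )$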
Here $Q_{\phi }(\mathbf {z}|\mathbf {x})$ is the variational approximation of the true posterior $P_{\theta }(\mathbf {z}|\mathbf {x})$. Specifically, $Q_{\phi }(\mathbf {z}|\mathbf {x})$ can be regarded as an encoder (a.k.a. the recognition model) and $P_{\theta }(\mathbf {x}|\mathbf {z})$ as the decoder (a.k.a. the generative model). Both encoder and decoder are implemented via neural networks. As proved in BIBREF1, optimising the marginal log likelihood is essentially equivalent to maximising $\mathcal {L}(\theta ,\phi ;\mathbf {x})$, i.e., the evidence lower bound (ELBO), which consists of two terms. The first term is the expected reconstruction error, indicating how well the model can reconstruct data given a latent variable. The second term is the KL divergence of the approximate posterior from the prior, i.e., a regularisation pushing the learned posterior to be as close to the prior as possible.
Methodology ::: Variational Autoencoder with Holistic Regularisation
In this section, we discuss the technical details of the proposed holistic regularisation VAE (HR-VAE) model, a general architecture which can effectively mitigate the KL vanishing phenomenon.
Our model design is motivated by one noticeable defect shared by the VAE-RNN based models in previous works BIBREF0, BIBREF7, BIBREF5, BIBREF6. That is, all these models, as shown in Figure FIGREF2, only impose a standard normal distribution prior on the last hidden state of the RNN encoder, which potentially leads to learning a suboptimal representation of the latent variable and results in a model vulnerable to KL loss vanishing. Our hypothesis is that to learn a good representation of data and a good generative model, it is crucial to impose the standard normal prior on all the hidden states of the RNN-based encoder (see Figure FIGREF2), which allows a better regularisation of the model learning process.
We implement the HR-VAE model using a two-layer LSTM for both the encoder and decoder. However, one should note that our architecture can be readily applied to other types of RNN such as GRU. For each time stamp $t$ (see Figure FIGREF2), we concatenate the hidden state $\mathbf {h}_t$ and the cell state $\mathbf {c}_t$ of the encoder. The concatenation (i.e., $[\mathbf {h}_t;\mathbf {c}_t]$) is then fed into two linear transformation layers for estimating $\mu _t$ and $\sigma ^2_t$, which are parameters of a normal distribution corresponding to the concatenation of $\mathbf {h}_t$ and $\mathbf {c}_t$. Let $Q_{\phi _t}(\mathbf {z}_t | \mathbf {x})=\mathcal {N}(\mathbf {z}_t|\mu _t,\sigma ^2_t)$, we wish $Q_{\phi _t}(\mathbf {z}_t | \mathbf {x})$ to be close to a prior $P(\mathbf {z}_t)$, which is a standard Gaussian. Finally, the KL divergence between these two multivariate Gaussian distributions (i.e., $Q_{\phi _t}$ and $P(\mathbf {z}_t)$) will contribute to the overall KL loss of the ELBO. By taking the average of the KL loss at each time stamp $t$, the resulting ELBO takes the following form
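Based on the description above, and with $T$ denoting the number of time stamps in the input sequence, the averaged-KL objective referred to as Eq. DISPLAY_FORM10 presumably takes the form:

$\mathcal {L}(\theta ,\phi ;\mathbf {x}) = \mathbb {E}_{Q_{\phi }(\mathbf {z}|\mathbf {x})}\big [\log P_{\theta }(\mathbf {x}|\mathbf {z})\big ] - \frac{1}{T}\sum _{t=1}^{T} D_{KL}\big (Q_{\phi _t}(\mathbf {z}_t|\mathbf {x}) \, \Vert \, P(\mathbf {z}_t)\big )$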
As can be seen in Eq. DISPLAY_FORM10, our solution to the KL collapse issue does not require any engineering to balance the weight between the reconstruction term and the KL loss, as is commonly the case in existing works BIBREF0, BIBREF8. The weight between these two terms in our model is simply $1:1$.
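To make the mechanism concrete, a minimal PyTorch-style sketch of such a holistically regularised encoder is given below. It is an illustration under our own assumptions — a single LSTMCell layer instead of the paper's two-layer LSTM, a log-variance parameterisation, an arbitrary latent size, and the choice of passing the decoder the sample from the last time stamp — rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class HRVAEEncoder(nn.Module):
    """Sketch of holistic regularisation: at every time stamp t a Gaussian
    posterior is estimated from [h_t; c_t] and its KL divergence from N(0, I)
    is accumulated, then averaged over the sequence."""

    def __init__(self, emb_dim=512, hidden=256, latent=32):
        super().__init__()
        self.cell = nn.LSTMCell(emb_dim, hidden)        # one layer for brevity
        self.to_mu = nn.Linear(2 * hidden, latent)
        self.to_logvar = nn.Linear(2 * hidden, latent)

    def forward(self, x_emb):                           # x_emb: (batch, T, emb_dim)
        batch, T, _ = x_emb.shape
        h = x_emb.new_zeros(batch, self.cell.hidden_size)
        c = x_emb.new_zeros(batch, self.cell.hidden_size)
        kl_sum = 0.0
        for t in range(T):
            h, c = self.cell(x_emb[:, t], (h, c))
            hc = torch.cat([h, c], dim=-1)              # [h_t; c_t]
            mu, logvar = self.to_mu(hc), self.to_logvar(hc)
            # KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions
            kl_sum = kl_sum + (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1))
        kl = (kl_sum / T).mean()                        # average over time, then batch
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # sample (last time stamp)
        return z, kl
```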
Experimental Setup ::: Datasets
We evaluate our model on two public datasets, namely, Penn Treebank (PTB) BIBREF9 and the end-to-end (E2E) text generation corpus BIBREF10, which have been used in a number of previous works for text generation BIBREF0, BIBREF5, BIBREF11, BIBREF12. PTB consists of more than 40,000 sentences from Wall Street Journal articles whereas the E2E dataset contains over 50,000 sentences of restaurant reviews. The statistics of these two datasets are summarised in Table TABREF11.
Experimental Setup ::: Implementation Details
For the PTB dataset, we used the train-test split following BIBREF0, BIBREF5. For the E2E dataset, we used the train-test split from the original dataset BIBREF10 and indexed the words with a frequency higher than 3. We represent input data with 512-dimensional word2vec embeddings BIBREF13. We set the dimension of the hidden layers of both encoder and decoder to 256. The Adam optimiser BIBREF14 was used for training with an initial learning rate of 0.0001. Each utterance in a mini-batch was padded to the maximum length for that batch, and the maximum batch-size allowed is 128.
Experimental Setup ::: Baselines
We compare our HR-VAE model with three strong baselines using VAE for text modelling:
VAE-LSTM-base: A variational autoencoder model which uses LSTM for both encoder and decoder. KL annealing is used to tackle the latent variable collapse issue BIBREF0;
VAE-CNN: A variational autoencoder model with a LSTM encoder and a dilated CNN decoder BIBREF7;
vMF-VAE: A variational autoencoder model using LSTM for both encoder and decoder where the prior distribution is the von Mises-Fisher (vMF) distribution rather than a Gaussian distribution BIBREF5.
Experimental Results
We evaluate our HR-VAE model in two experimental settings, following the setup of BIBREF0, BIBREF5. In the standard setting, the input to the decoder at each time stamp is the concatenation of latent variable $\mathbf {z}$ and the ground truth word of the previous time stamp. Under this setting, the decoder is more powerful because it uses the ground truth word as input, so little information about the training data is captured by the latent variable $\mathbf {z}$. The inputless setting, in contrast, does not use the previous ground truth word as input for the decoder. In other words, the decoder needs to predict the entire sequence with only the help of the given latent variable $\mathbf {z}$. In this way, the decoder needs a high-quality representation abstracting the information of the input sentence, which forces $\mathbf {z}$ to learn the required information.
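The difference between the two settings boils down to how the decoder input is assembled at each step; a small sketch follows (tensor shapes and the zero vector used for the first step are our assumptions).

```python
import torch

def decoder_inputs(z, tgt_emb, inputless):
    """Illustrative construction of per-step decoder inputs.

    z:        (batch, latent)        latent code
    tgt_emb:  (batch, T, emb_dim)    embeddings of the ground-truth words
    In the standard setting the decoder sees [z; previous gold word];
    in the inputless setting it sees only z at every step.
    """
    batch, T, emb_dim = tgt_emb.shape
    z_rep = z.unsqueeze(1).expand(batch, T, z.size(-1))
    if inputless:
        return z_rep
    # shift gold words right by one step so step t sees word t-1
    prev = torch.cat([tgt_emb.new_zeros(batch, 1, emb_dim), tgt_emb[:, :-1]], dim=1)
    return torch.cat([z_rep, prev], dim=-1)
```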
Overall performance. Table TABREF13 shows the language modelling results of our approach and the baselines. We report negative log likelihood (NLL), KL loss, and perplexity (PPL) on the test set. As expected, all the models have a higher KL loss in the inputless setting than the standard setting, as $\mathbf {z}$ is required to encode more information about the input data for reconstruction. In terms of overall performance, our model outperforms all the baselines in both datasets (i.e., PTB and E2E). For instance, when comparing with the strongest baseline vMF-VAE in the standard setting, our model reduces NLL from 96 to 79 and PPL from 98 to 43 in PTB, respectively. In the inputless setting, our performance gain is even higher, i.e., NLL reduced from 117 to 85 and PPL from 262 to 54. A similar pattern can be observed for the E2E dataset. These observations suggest that our approach can learn a better generative model for data.
Loss analysis. To conduct a more thorough evaluation, we further investigate model behaviours in terms of both reconstruction loss and KL loss, as shown in Figure FIGREF14. These plots were obtained based on the E2E training set using the inputless setting.
We can see that the KL loss of VAE-LSTM-base, which uses Sigmoid annealing BIBREF0, collapses to zero, leading to a poor generative performance as indicated by the high reconstruction loss. The KL loss for both VAE-CNN and vMF-VAE are nonzero, where the former mitigates the KL collapse issue with a KL loss clipping strategy and the latter by replacing the standard normal distribution for the prior with the vMF distribution (i.e., with the vMF distribution, the KL loss only depends on a fixed concentration parameter, and hence results in a constant KL loss). Although both VAE-CNN and vMF-VAE outperform VAE-LSTM-base by a large margin in terms of reconstruction loss as shown in Figure FIGREF14, one should also notice that these two models actually overfit the training data, as their performance on the test set is much worse (cf. Table TABREF13). In contrast to the baselines which mitigate the KL collapse issue by carefully engineering the weight between the reconstruction loss and KL loss or choosing a different choice of prior, we provide a simple and elegant solution through holistic KL regularisation, which can effectively mitigate the KL collapse issue and achieve a better reconstruction error in both training and testing.
Sentence reconstruction. Lastly, we show some sentence examples reconstructed by vMF-VAE (i.e., the best baseline) and our model in the inputless setting using sentences from the E2E test set as input. As shown in Table TABREF15, the sentences generated by vMF-VAE contain repeated words in quite a few cases, such as `city city area' and `blue spice spice'. In addition, vMF-VAE also tends to generate unnecessary or unrelated words at the end of sentences, making the generated sentences ungrammatical. The sentences reconstructed by our model, in contrast, are more grammatical and more similar to the corresponding ground truth sentences than vMF-VAE.
Conclusion
In this paper, we present a simple and generic architecture called holistic regularisation VAE (HR-VAE), which can effectively avoid latent variable collapse. In contrast to existing VAE-RNN models which merely impose a standard normal distribution prior on the last hidden state of the RNN encoder, our HR-VAE model imposes regularisation on all the hidden states, allowing a better regularisation of the model learning process. Empirical results show that our model can effectively mitigate the latent variable collapse issue while giving a better predictive performance than the baselines.
Acknowledgment
This work is supported by the award made by the UK Engineering and Physical Sciences Research Council (Grant number: EP/P011829/1). | Penn Treebank (PTB), end-to-end (E2E) text generation corpus |
983c2fe7bdbf471bb8b15db858fd2cbec86b96a5 | 983c2fe7bdbf471bb8b15db858fd2cbec86b96a5_0 | Q: Do they compare against state of the art text generation?
Yes
a5418e4af99a2cbd6b7a2b8041388a2d01b8efb2 | a5418e4af99a2cbd6b7a2b8041388a2d01b8efb2_0 | Q: How do they evaluate generated text quality?
Loss analysis. To conduct a more thorough evaluation, we further investigate model behaviours in terms of both reconstruction loss and KL loss, as shown in Figure FIGREF14. These plots were obtained based on the E2E training set using the inputless setting.
b540cd4fe9dc4394f64d5b76b0eaa4d9e30fb728 | b540cd4fe9dc4394f64d5b76b0eaa4d9e30fb728_0 | Q: Could you tell me more about the metrics used for performance evaluation?
Text: Introduction
With the growing amount of biomedical information available in textual form, there have been significant advances in the development of pre-training language representations that can be applied to a range of different tasks in the biomedical domain, such as pre-trained word embeddings, sentence embeddings, and contextual representations BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 .
In the general domain, we have recently observed that the General Language Understanding Evaluation (GLUE) benchmark BIBREF5 has been successfully promoting the development of language representations of general purpose BIBREF2 , BIBREF6 , BIBREF7 . To the best of our knowledge, however, there is no publicly available benchmarking in the biomedicine domain.
To facilitate research on language representations in the biomedicine domain, we present the Biomedical Language Understanding Evaluation (BLUE) benchmark, which consists of five different biomedicine text-mining tasks with ten corpora. Here, we rely on preexisting datasets because they have been widely used by the BioNLP community as shared tasks BIBREF8 . These tasks cover a diverse range of text genres (biomedical literature and clinical notes), dataset sizes, and degrees of difficulty and, more importantly, highlight common biomedicine text-mining challenges. We expect that the models that perform better on all or most tasks in BLUE will address other biomedicine tasks more robustly.
To better understand the challenge posed by BLUE, we conduct experiments with two baselines: One makes use of the BERT model BIBREF7 and one makes use of ELMo BIBREF2 . Both are state-of-the-art language representation models and demonstrate promising results in NLP tasks of general purpose. We find that the BERT model pre-trained on PubMed abstracts BIBREF9 and MIMIC-III clinical notes BIBREF10 achieves the best results, and is significantly superior to other models in the clinical domain. This demonstrates the importance of pre-training among different text genres.
In summary, we offer: (i) five tasks with ten biomedical and clinical text-mining corpora with different sizes and levels of difficulty, (ii) codes for data construction and model evaluation for fair comparisons, (iii) pretrained BERT models on PubMed abstracts and MIMIC-III, and (iv) baseline results.
Related work
There is a long history of using shared language representations to capture text semantics in biomedical text and data mining research. Such research utilizes a technique, termed transfer learning, whereby the language representations are pre-trained on large corpora and fine-tuned in a variety of downstream tasks, such as named entity recognition and relation extraction.
One established trend is a form of word embeddings that represent semantics using high-dimensional vectors BIBREF0 , BIBREF11 , BIBREF12 . Similar methods have also been derived to improve embeddings of word sequences by introducing sentence embeddings BIBREF1 . However, they always require complicated neural networks to be used effectively in downstream applications.
Another popular trend, especially in recent years, is the context-dependent representation. Different from word embeddings, it allows the meaning of a word to change according to the context in which it is used BIBREF13 , BIBREF2 , BIBREF7 , BIBREF14 . In the scientific domain, BIBREF15 released SciBERT which is trained on scientific text. In the biomedical domain, BioBERT BIBREF3 and BioELMo BIBREF16 were pre-trained and applied to several specific tasks. In the clinical domain, BIBREF17 released a clinical BERT base model trained on the MIMIC-III database. Most of these works, however, were evaluated on either different datasets or the same dataset with slightly different sizes of examples. This makes it challenging to fairly compare various language models.
Based on these reasons, a standard benchmark is urgently required. Parallel to our work, BIBREF3 introduced three tasks: named entity recognition, relation extraction, and QA, while BIBREF16 introduced NLI in addition to named entity recognition. In comparison, we deem that BLUE is different in three ways. First, BLUE is selected to cover a diverse range of text genres, including both biomedical and clinical domains. Second, BLUE goes beyond sentences or sentence pairs by including document classification tasks. Third, BLUE provides a comprehensive suite of code to reconstruct the datasets from scratch without removing any instances.
Tasks
BLUE contains five tasks with ten corpora that cover a broad range of data quantities and difficulties (Table 1 ). Here, we rely on preexisting datasets because they have been widely used by the BioNLP community as shared tasks.
Sentence similarity
The sentence similarity task is to predict similarity scores based on sentence pairs. Following common practice, we evaluate similarity by using Pearson correlation coefficients.
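For reference, this evaluation reduces to a single call to a Pearson correlation routine over the predicted and gold similarity scores, e.g.:

```python
from scipy.stats import pearsonr

def sentence_similarity_score(predicted, gold):
    """Pearson correlation between predicted and gold similarity scores,
    as used for the sentence similarity corpora below."""
    r, _ = pearsonr(predicted, gold)
    return r

# e.g. sentence_similarity_score([0.1, 2.5, 3.9], [0.0, 2.0, 4.0])
```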
BIOSSES is a corpus of sentence pairs selected from the Biomedical Summarization Track Training Dataset in the biomedical domain BIBREF18 . To develop BIOSSES, five curators judged their similarity, using scores that ranged from 0 (no relation) to 4 (equivalent). Here, we randomly select 80% for training and 20% for testing because there is no standard splits in the released data.
MedSTS is a corpus of sentence pairs selected from Mayo Clinic’s clinical data warehouse BIBREF19 . To develop MedSTS, two medical experts graded the sentence's semantic similarity scores from 0 to 5 (low to high similarity). We use the standard training and testing sets in the shared task.
Named entity recognition
The aim of the named entity recognition task is to predict mention spans given in the text BIBREF20 . The results are evaluated through a comparison of the set of mention spans annotated within the document with the set of mention spans predicted by the model. We evaluate the results by using the strict version of precision, recall, and F1-score. For disjoint mentions, all spans also must be strictly correct. To construct the dataset, we used spaCy to split the text into a sequence of tokens when the original datasets do not provide such information.
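A minimal sketch of this strict matching criterion is shown below; the tuple layout used to represent a mention is our assumption.

```python
def strict_span_f1(gold_spans, pred_spans):
    """Strict mention-level evaluation: a predicted span counts as correct
    only if it exactly matches a gold span.
    Each argument is a set of (doc_id, start, end, entity_type) tuples."""
    tp = len(gold_spans & pred_spans)
    precision = tp / len(pred_spans) if pred_spans else 0.0
    recall = tp / len(gold_spans) if gold_spans else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return precision, recall, f1
```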
BC5CDR is a collection of 1,500 PubMed titles and abstracts selected from the CTD-Pfizer corpus and was used in the BioCreative V chemical-disease relation task BIBREF21 . The diseases and chemicals mentioned in the articles were annotated independently by two human experts with medical training and curation experience. We use the standard training and test set in the BC5CDR shared task BIBREF22 .
ShARe/CLEF eHealth Task 1 Corpus is a collection of 299 deidentified clinical free-text notes from the MIMIC II database BIBREF23 . The disorders mentioned in the clinical notes were annotated by two professionally trained annotators, followed by an adjudication step, resulting in high inter-annotator agreement. We use the standard training and test set in the ShARe/CLEF eHealth Tasks 1.
Relation extraction
The aim of the relation extraction task is to predict relations and their types between the two entities mentioned in the sentences. The relations with types were compared to annotated data. We use the standard micro-average precision, recall, and F1-score metrics.
DDI extraction 2013 corpus is a collection of 792 texts selected from the DrugBank database and other 233 Medline abstracts BIBREF24 . The drug-drug interactions, including both pharmacokinetic and pharmacodynamic interactions, were annotated by two expert pharmacists with a substantial background in pharmacovigilance. In our benchmark, we use 624 train files and 191 test files to evaluate the performance and report the micro-average F1-score of the four DDI types.
ChemProt consists of 1,820 PubMed abstracts with chemical-protein interactions annotated by domain experts and was used in the BioCreative VI text mining chemical-protein interactions shared task BIBREF25 . We use the standard training and test sets in the ChemProt shared task and evaluate the same five classes: CPR:3, CPR:4, CPR:5, CPR:6, and CPR:9.
i2b2 2010 shared task collection consists of 170 documents for training and 256 documents for testing, which is the subset of the original dataset BIBREF26 . The dataset was collected from three different hospitals and was annotated by medical practitioners for eight types of relations between problems and treatments.
Document multilabel classification
The multilabel classification task predicts multiple labels from the texts.
HoC (the Hallmarks of Cancers corpus) consists of 1,580 PubMed abstracts annotated with ten currently known hallmarks of cancer BIBREF27 . Annotation was performed at sentence level by an expert with 15+ years of experience in cancer research. We use 315 ( $\sim $ 20%) abstracts for testing and the remaining abstracts for training. For the HoC task, we followed the common practice and reported the example-based F1-score on the abstract level BIBREF28 , BIBREF29 .
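One common way to compute this example-based, abstract-level F1 is sketched below; the treatment of abstracts with empty label sets is our assumption.

```python
def example_based_f1(gold_labels, pred_labels):
    """Example-based F1 at the abstract level: compute F1 between the
    predicted and gold label sets of each abstract, then average.
    Both arguments are lists of sets of hallmark labels, one set per abstract."""
    scores = []
    for gold, pred in zip(gold_labels, pred_labels):
        if not gold and not pred:
            scores.append(1.0)
            continue
        tp = len(gold & pred)
        p = tp / len(pred) if pred else 0.0
        r = tp / len(gold) if gold else 0.0
        scores.append(2 * p * r / (p + r) if p + r else 0.0)
    return sum(scores) / len(scores)
```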
Inference task
The aim of the inference task is to predict whether the premise sentence entails or contradicts the hypothesis sentence. We use the standard overall accuracy to evaluate the performance.
MedNLI is a collection of sentence pairs selected from MIMIC-III BIBREF30 . Given a premise sentence and a hypothesis sentence, two board-certified radiologists graded whether the task predicted whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). We use the same training, development, and test sets in Romanov and Shivade BIBREF30 .
Total score
Following the practice in BIBREF5 and BIBREF3 , we use a macro-average of F1-scores and Pearson scores to determine a system's position.
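That is, the overall score is simply the unweighted mean of the per-task scores (Pearson correlation for the similarity tasks, F1 elsewhere); a sketch, with purely illustrative numbers:

```python
def blue_total_score(task_scores):
    """Overall BLUE score: macro-average of the per-task evaluation scores."""
    return sum(task_scores.values()) / len(task_scores)

# e.g. blue_total_score({"BIOSSES": 0.85, "BC5CDR": 0.88, "HoC": 0.81})
```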
Baselines
For baselines, we evaluate several pre-training models as described below. The original code for the baselines is available at https://github.com/ncbi-nlp/NCBI_BERT.
BERT
BERT BIBREF7 is a contextualized word representation model that is pre-trained based on a masked language model, using bidirectional Transformers BIBREF31 .
In this paper, we pre-trained our own model BERT on PubMed abstracts and clinical notes (MIMIC-III). The statistics of the text corpora on which BERT was pre-trained are shown in Table 2 .
We initialized BERT with pre-trained BERT provided by BIBREF7 . We then continue to pre-train the model, using the listed corpora.
We released our BERT-Base and BERT-Large models, using the same vocabulary, sequence length, and other configurations provided by BIBREF7 . Both models were trained with 5M steps on the PubMed corpus and 0.2M steps on the MIMIC-III corpus.
BERT is applied to various downstream text-mining tasks while requiring only minimal architecture modification.
For sentence similarity tasks, we packed the sentence pairs together into a single sequence, as suggested in BIBREF7 .
For named entity recognition, we used BIO tags for each token in the sentence. We treated the task similarly to machine translation, predicting the sequence of BIO tags from the input sentence.
We treated the relation extraction task as a sentence classification by replacing two named entity mentions of interest in the sentence with pre-defined tags (e.g., @GENE$, @DRUG$) BIBREF3 . For example, we used “@CHEMICAL$ protected against the RTI-76-induced inhibition of @GENE$ binding.” to replace the original sentence “Citalopram protected against the RTI-76-induced inhibition of SERT binding.” in which “citalopram” and “SERT” has a chemical-gene relation.
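A simplified sketch of this entity-masking step follows; real preprocessing would use the annotated character offsets rather than plain string replacement.

```python
def mask_entities(sentence, entity1, entity2, tag1="@CHEMICAL$", tag2="@GENE$"):
    """Replace the two entity mentions of interest with pre-defined tags,
    as in the ChemProt example above."""
    return sentence.replace(entity1, tag1).replace(entity2, tag2)

# mask_entities("Citalopram protected against the RTI-76-induced inhibition of SERT binding.",
#               "Citalopram", "SERT")
# -> "@CHEMICAL$ protected against the RTI-76-induced inhibition of @GENE$ binding."
```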
For multi-label tasks, we fine-tuned the model to predict multi-labels for each sentence in the document. We then combine the labels in one document and compare them with the gold-standard.
Like BERT, we provide source code for fine-tuning, prediction, and evaluation to make it straightforward to follow these examples and use our BERT pre-trained models for all tasks. | BLUE utilizes different metrics for each of the tasks: Pearson correlation coefficient, F-1 scores, micro-averaging, and accuracy
41173179efa6186eef17c96f7cbd8acb29105b0e | 41173179efa6186eef17c96f7cbd8acb29105b0e_0 | Q: which tasks are used in BLUE benchmark?
Text: Introduction
With the growing amount of biomedical information available in textual form, there have been significant advances in the development of pre-training language representations that can be applied to a range of different tasks in the biomedical domain, such as pre-trained word embeddings, sentence embeddings, and contextual representations BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 .
In the general domain, we have recently observed that the General Language Understanding Evaluation (GLUE) benchmark BIBREF5 has been successfully promoting the development of language representations of general purpose BIBREF2 , BIBREF6 , BIBREF7 . To the best of our knowledge, however, there is no publicly available benchmarking in the biomedicine domain.
To facilitate research on language representations in the biomedicine domain, we present the Biomedical Language Understanding Evaluation (BLUE) benchmark, which consists of five different biomedicine text-mining tasks with ten corpora. Here, we rely on preexisting datasets because they have been widely used by the BioNLP community as shared tasks BIBREF8 . These tasks cover a diverse range of text genres (biomedical literature and clinical notes), dataset sizes, and degrees of difficulty and, more importantly, highlight common biomedicine text-mining challenges. We expect that the models that perform better on all or most tasks in BLUE will address other biomedicine tasks more robustly.
To better understand the challenge posed by BLUE, we conduct experiments with two baselines: One makes use of the BERT model BIBREF7 and one makes use of ELMo BIBREF2 . Both are state-of-the-art language representation models and demonstrate promising results in NLP tasks of general purpose. We find that the BERT model pre-trained on PubMed abstracts BIBREF9 and MIMIC-III clinical notes BIBREF10 achieves the best results, and is significantly superior to other models in the clinical domain. This demonstrates the importance of pre-training among different text genres.
In summary, we offer: (i) five tasks with ten biomedical and clinical text-mining corpora with different sizes and levels of difficulty, (ii) codes for data construction and model evaluation for fair comparisons, (iii) pretrained BERT models on PubMed abstracts and MIMIC-III, and (iv) baseline results.
Related work
There is a long history of using shared language representations to capture text semantics in biomedical text and data mining research. Such research utilizes a technique, termed transfer learning, whereby the language representations are pre-trained on large corpora and fine-tuned in a variety of downstream tasks, such as named entity recognition and relation extraction.
One established trend is a form of word embeddings that represents semantics using high-dimensional vectors BIBREF0 , BIBREF11 , BIBREF12 . Similar methods have also been derived to improve embeddings of word sequences by introducing sentence embeddings BIBREF1 . However, they typically require complicated neural networks to be used effectively in downstream applications.
Another popular trend, especially in recent years, is the context-dependent representation. Different from word embeddings, it allows the meaning of a word to change according to the context in which it is used BIBREF13 , BIBREF2 , BIBREF7 , BIBREF14 . In the scientific domain, BIBREF15 released SciBERT which is trained on scientific text. In the biomedical domain, BioBERT BIBREF3 and BioELMo BIBREF16 were pre-trained and applied to several specific tasks. In the clinical domain, BIBREF17 released a clinical BERT base model trained on the MIMIC-III database. Most of these works, however, were evaluated on either different datasets or the same dataset with slightly different sizes of examples. This makes it challenging to fairly compare various language models.
For these reasons, a standard benchmark is urgently needed. In parallel to our work, BIBREF3 introduced three tasks: named entity recognition, relation extraction, and QA, while BIBREF16 introduced NLI in addition to named entity recognition. We deem that BLUE differs in three ways. First, BLUE is selected to cover a diverse range of text genres, including both biomedical and clinical domains. Second, BLUE goes beyond sentences or sentence pairs by including document classification tasks. Third, BLUE provides a comprehensive suite of code to reconstruct the datasets from scratch without removing any instances.
Tasks
BLUE contains five tasks with ten corpora that cover a broad range of data quantities and difficulties (Table 1 ). Here, we rely on preexisting datasets because they have been widely used by the BioNLP community as shared tasks.
Sentence similarity
The sentence similarity task is to predict similarity scores based on sentence pairs. Following common practice, we evaluate similarity by using Pearson correlation coefficients.
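As a minimal evaluation sketch, assuming aligned lists of predicted and gold scores (scipy is used here purely for illustration):

```python
from scipy.stats import pearsonr

def sentence_similarity_score(predicted_scores, gold_scores):
    """Pearson correlation between predicted and gold similarity scores."""
    r, _p_value = pearsonr(predicted_scores, gold_scores)
    return r
```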
BIOSSES is a corpus of sentence pairs selected from the Biomedical Summarization Track Training Dataset in the biomedical domain BIBREF18 . To develop BIOSSES, five curators judged their similarity, using scores that ranged from 0 (no relation) to 4 (equivalent). Here, we randomly select 80% for training and 20% for testing because there are no standard splits in the released data.
MedSTS is a corpus of sentence pairs selected from Mayo Clinic’s clinical data warehouse BIBREF19 . To develop MedSTS, two medical experts graded the sentences' semantic similarity on a scale from 0 to 5 (low to high similarity). We use the standard training and testing sets in the shared task.
Named entity recognition
The aim of the named entity recognition task is to predict mention spans in the given text BIBREF20 . The results are evaluated through a comparison of the set of mention spans annotated within the document with the set of mention spans predicted by the model. We evaluate the results by using the strict version of precision, recall, and F1-score. For disjoint mentions, all spans must also be strictly correct. To construct the dataset, we used spaCy to split the text into a sequence of tokens when the original datasets do not provide such information.
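A minimal sketch of this strict span-level scoring is given below; the span tuple format (doc_id, start, end, type) is an assumption for illustration rather than the exact representation used in the benchmark code.

```python
def strict_ner_scores(gold_spans, pred_spans):
    """Strict precision/recall/F1 over sets of mention spans.

    Each span is a hashable tuple such as (doc_id, start, end, entity_type);
    a predicted span counts only if it matches a gold span exactly.
    """
    gold, pred = set(gold_spans), set(pred_spans)
    true_positives = len(gold & pred)
    precision = true_positives / len(pred) if pred else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```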
BC5CDR is a collection of 1,500 PubMed titles and abstracts selected from the CTD-Pfizer corpus and was used in the BioCreative V chemical-disease relation task BIBREF21 . The diseases and chemicals mentioned in the articles were annotated independently by two human experts with medical training and curation experience. We use the standard training and test set in the BC5CDR shared task BIBREF22 .
ShARe/CLEF eHealth Task 1 Corpus is a collection of 299 deidentified clinical free-text notes from the MIMIC II database BIBREF23 . The disorders mentioned in the clinical notes were annotated by two professionally trained annotators, followed by an adjudication step, resulting in high inter-annotator agreement. We use the standard training and test set in the ShARe/CLEF eHealth Tasks 1.
Relation extraction
The aim of the relation extraction task is to predict relations and their types between the two entities mentioned in the sentences. The predicted relations and their types are compared with the annotated data. We use the standard micro-average precision, recall, and F1-score metrics.
DDI extraction 2013 corpus is a collection of 792 texts selected from the DrugBank database and other 233 Medline abstracts BIBREF24 . The drug-drug interactions, including both pharmacokinetic and pharmacodynamic interactions, were annotated by two expert pharmacists with a substantial background in pharmacovigilance. In our benchmark, we use 624 train files and 191 test files to evaluate the performance and report the micro-average F1-score of the four DDI types.
ChemProt consists of 1,820 PubMed abstracts with chemical-protein interactions annotated by domain experts and was used in the BioCreative VI text mining chemical-protein interactions shared task BIBREF25 . We use the standard training and test sets in the ChemProt shared task and evaluate the same five classes: CPR:3, CPR:4, CPR:5, CPR:6, and CPR:9.
The i2b2 2010 shared task collection consists of 170 documents for training and 256 documents for testing, which is a subset of the original dataset BIBREF26 . The dataset was collected from three different hospitals and was annotated by medical practitioners for eight types of relations between problems and treatments.
Document multilabel classification
The multilabel classification task predicts multiple labels from the texts.
HoC (the Hallmarks of Cancers corpus) consists of 1,580 PubMed abstracts annotated with ten currently known hallmarks of cancer BIBREF27 . Annotation was performed at sentence level by an expert with 15+ years of experience in cancer research. We use 315 ( $\sim $ 20%) abstracts for testing and the remaining abstracts for training. For the HoC task, we followed the common practice and reported the example-based F1-score on the abstract level BIBREF28 , BIBREF29 .
Inference task
The aim of the inference task is to predict whether the premise sentence entails or contradicts the hypothesis sentence. We use the standard overall accuracy to evaluate the performance.
MedNLI is a collection of sentence pairs selected from MIMIC-III BIBREF30 . Given a premise sentence and a hypothesis sentence, two board-certified radiologists graded whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). We use the same training, development, and test sets as in Romanov and Shivade BIBREF30 .
Total score
Following the practice in BIBREF5 and BIBREF3 , we use a macro-average of F1-scores and Pearson scores to determine a system's position.
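A minimal sketch of this aggregation, assuming the per-task scores have already been collected on a common 0-1 scale:

```python
def blue_total_score(task_scores):
    """Macro-average of per-task scores (F1-scores or Pearson correlations),
    e.g. {"BIOSSES": 0.85, "BC5CDR": 0.88, ...} -> a single ranking score."""
    return sum(task_scores.values()) / len(task_scores)
```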
Baselines
For baselines, we evaluate several pre-training models as described below. The original code for the baselines is available at https://github.com/ncbi-nlp/NCBI_BERT.
BERT
BERT BIBREF7 is a contextualized word representation model that is pre-trained based on a masked language model, using bidirectional Transformers BIBREF31 .
In this paper, we pre-trained our own BERT models on PubMed abstracts and clinical notes (MIMIC-III). The statistics of the text corpora on which BERT was pre-trained are shown in Table 2 .
We initialized BERT with the pre-trained BERT provided by BIBREF7 . We then continued to pre-train the model using the listed corpora.
We released our BERT-Base and BERT-Large models, using the same vocabulary, sequence length, and other configurations provided by BIBREF7 . Both models were trained with 5M steps on the PubMed corpus and 0.2M steps on the MIMIC-III corpus.
BERT is applied to various downstream text-mining tasks while requiring only minimal architecture modification.
For sentence similarity tasks, we packed the sentence pairs together into a single sequence, as suggested in BIBREF7 .
For named entity recognition, we used BIO tags for each token in the sentence. We treated the task similarly to machine translation, predicting the sequence of BIO tags from the input sentence.
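As an illustration of the tagging scheme, consider the made-up example below (the tokens, entity types, and labels are not taken from the corpora):

```python
# Each input token is paired with a BIO label, and the model is trained to
# predict the label sequence from the token sequence.
tokens = ["Citalopram", "induced", "nausea", "in", "rats", "."]
labels = ["B-Chemical", "O", "B-Disease", "O", "O", "O"]
assert len(tokens) == len(labels)
```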
We treated the relation extraction task as sentence classification by replacing the two named entity mentions of interest in the sentence with pre-defined tags (e.g., @GENE$, @DRUG$) BIBREF3 . For example, we used “@CHEMICAL$ protected against the RTI-76-induced inhibition of @GENE$ binding.” to replace the original sentence “Citalopram protected against the RTI-76-induced inhibition of SERT binding.”, in which “citalopram” and “SERT” have a chemical-gene relation.
For multi-label tasks, we fine-tuned the model to predict multiple labels for each sentence in the document. We then combined the labels within one document and compared them with the gold standard.
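A small sketch of this document-level aggregation; the input format is an assumption for illustration:

```python
from collections import defaultdict

def aggregate_document_labels(sentence_predictions):
    """Union sentence-level label sets into one label set per document.

    `sentence_predictions` is an iterable of (doc_id, predicted_labels) pairs.
    """
    document_labels = defaultdict(set)
    for doc_id, labels in sentence_predictions:
        document_labels[doc_id].update(labels)
    return document_labels
```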
Like BERT, we provide source code for fine-tuning, prediction, and evaluation, so that it is straightforward to follow these examples and use our BERT pre-trained models for all tasks. | Inference task
The aim of the inference task is to predict whether the premise sentence entails or contradicts the hypothesis sentence, Document multilabel classification
The multilabel classification task predicts multiple labels from the texts., Relation extraction
The aim of the relation extraction task is to predict relations and their types between the two entities mentioned in the sentences., Named entity recognition
The aim of the named entity recognition task is to predict mention spans given in the text , Sentence similarity
The sentence similarity task is to predict similarity scores based on sentence pairs |
0bd683c51a87a110b68b377e9a06f0a3e12c8da0 | 0bd683c51a87a110b68b377e9a06f0a3e12c8da0_0 | Q: What are the tasks that this method has shown improvements?
Text: Introduction
Word embeddings are one of the most widely used resources in NLP, as they have proven to be of enormous importance for modeling linguistic phenomena in both supervised and unsupervised settings. In particular, the representation of words in cross-lingual vector spaces (henceforth, cross-lingual word embeddings) is quickly gaining in popularity. One of the main reasons is that they play a crucial role in transferring knowledge from one language to another, specifically in downstream tasks such as information retrieval BIBREF0 , entity linking BIBREF1 and text classification BIBREF2 , while at the same time providing improvements in multilingual NLP problems such as machine translation BIBREF3 .
There exist different approaches for obtaining these cross-lingual embeddings. One of the most successful methodological directions, which constitutes the main focus of this paper, attempts to learn bilingual embeddings via a two-step process: first, word embeddings are trained on monolingual corpora and then the resulting monolingual spaces are aligned by taking advantage of bilingual dictionaries BIBREF4 , BIBREF5 , BIBREF6 .
These alignments are generally modeled as linear transformations, which are constrained such that the structure of the initial monolingual spaces is left unchanged. This can be achieved by imposing an orthogonality constraint on the linear transformation BIBREF6 , BIBREF7 . Our hypothesis in this paper is that such approaches can be further improved, as they rely on the assumption that the internal structure of the two monolingual spaces is identical. In reality, however, this structure is influenced by language-specific phenomena, e.g., the fact that Spanish distinguishes between masculine and feminine nouns BIBREF8 as well as the specific biases of the different corpora from which the monolingual spaces were learned. Because of this, monolingual embedding spaces are not isomorphic BIBREF9 , BIBREF10 . On the other hand, simply dropping the orthogonality constraints leads to overfitting, and is thus not effective in practice.
The solution we propose is to start with existing state-of-the-art alignment models BIBREF11 , BIBREF12 , and to apply a further transformation to the resulting initial alignment. For each word $w$ with translation $w^{\prime }$ , this additional transformation aims to map the vector representations of both $w$ and $w^{\prime }$ onto their average, thereby creating a cross-lingual vector space which intuitively corresponds to the average of the two aligned monolingual vector spaces. Similar to the initial alignment, this mapping is learned from a small bilingual lexicon.
Our experimental results show that the proposed additional transformation does not only benefit cross-lingual evaluation tasks, but, perhaps surprisingly, also monolingual ones. In particular, we perform an extensive set of experiments on standard benchmarks for bilingual dictionary induction and monolingual and cross-lingual word similarity, as well as on an extrinsic task: cross-lingual hypernym discovery.
Code and pre-trained embeddings to reproduce our experiments and to apply our model to any given cross-lingual embeddings are available at https://github.com/yeraidm/meemi.
Related Work
Bilingual word embeddings have been extensively studied in the literature in recent years. Their nature varies with respect to the supervision signals used for training BIBREF13 , BIBREF14 . Some common signals to learn bilingual embeddings come from parallel BIBREF15 , BIBREF16 , BIBREF17 or comparable corpora BIBREF18 , BIBREF19 , BIBREF20 , or lexical resources such as WordNet, ConceptNet or BabelNet BIBREF21 , BIBREF22 , BIBREF23 . However, these sources of supervision may be scarce, limited to certain domains or may not be directly available for certain language pairs.
Another branch of research exploits pre-trained monolingual embeddings with weak signals such as bilingual lexicons for learning bilingual embeddings BIBREF4 , BIBREF5 , BIBREF24 , BIBREF7 . mikolov2013exploiting was one of the first attempts into this line of research, applying a linear transformation in order to map the embeddings from one monolingual space into another. They also noted that more sophisticated approaches, such as using multilayer perceptrons, do not improve with respect to their linear counterparts. xing2015normalized built upon this work by normalizing word embeddings during training and adding an orthogonality constraint. In a complementary direction, faruqui2014improving put forward a technique based on canonical correlation analysis to obtain linear mappings for both monolingual embedding spaces into a new shared space. artetxe2016learning proposed a similar linear mapping to mikolov2013exploiting, generalizing it and providing theoretical justifications which also served to reinterpret the methods of faruqui2014improving and xing2015normalized. smith2017offline further showed how orthogonality was required to improve the consistency of bilingual mappings, making them more robust to noise. Finally, a more complete generalization providing further insights on the linear transformations used in all these models can be found in artetxe2018generalizing.
These approaches generally require large bilingual lexicons to effectively learn multilingual embeddings BIBREF11 . Recently, however, alternatives which only need very small dictionaries, or even none at all, have been proposed to learn high-quality embeddings via linear mappings BIBREF11 , BIBREF12 . More details on the specifics of these two approaches can be found in Section "Aligning monolingual spaces" . These models have in turn paved the way for the development of machine translation systems which do not require any parallel corpora BIBREF25 , BIBREF26 . Moreover, the fact that such approaches only need monolingual embeddings, instead of parallel or comparable corpora, makes them easily adaptable to different domains (e.g., social media or web corpora).
In this paper we build upon these state-of-the-art approaches by applying an additional transformation, which aims to map each word and its translation onto the average of their vector representations. This strategy bears some resemblance with the idea of learning meta-embeddings BIBREF27 . Meta-embeddings are vector space representations which aggregate several pre-trained word embeddings from a given language (e.g., trained using different corpora and/or different word embedding models). Empirically it was found that such meta-embeddings can often outperform the individual word embeddings from which they were obtained. In particular, it was recently argued that word vector averaging can be a highly effective approach for learning such meta-embeddings BIBREF28 . The main difference between such approaches and our work is that because we rely on a small dictionary, we cannot simply average word vectors, since for most words we do not know the corresponding translation. Instead, we train a regression model to predict this average word vector from the vector representation of the given word only, i.e., without using the vector representation of its translation.
Methodology
Our approach for improving cross-lingual embeddings consists of three main steps, where the first two steps are the same as in existing methods. In particular, given two monolingual corpora, a word vector space is first learned independently for each language. This can be achieved with common word embedding models, e.g., Word2vec BIBREF29 , GloVe BIBREF30 or FastText BIBREF31 . Second, a linear alignment strategy is used to map the monolingual embeddings to a common bilingual vector space (Section "Aligning monolingual spaces" ). Third, a final transformation is applied on the aligned embeddings so the word vectors from both languages are refined and further integrated with each other (Section "Meeting in the middle" ). This third step is the main contribution of our paper.
Aligning monolingual spaces
Once the monolingual word embeddings have been obtained, a linear transformation is applied in order to integrate them into the same vector space. This linear transformation is generally carried out using a supervision signal, typically in the form of a bilingual dictionary. In the following we explain two state-of-the-art models performing this linear transformation.
VecMap uses an orthogonal transformation over normalized word embeddings. An iterative two-step procedure is also implemented in order to avoid the need of starting with a large seed dictionary (e.g., in the original paper it was tested with a very small bilingual dictionary of just 25 pairs). In this procedure, first, the linear mapping is estimated using a small bilingual dictionary, and then, this dictionary is augmented by applying the learned transformation to new words from the source language. Lastly, the process is repeated until some convergence criterion is met.
In MUSE, the transformation matrix is learned through an iterative Procrustes alignment BIBREF32 . The anchor points needed for this alignment can be obtained either through a supplied bilingual dictionary or through an unsupervised model. This unsupervised model is trained using adversarial learning to obtain an initial alignment of the two monolingual spaces, which is then refined by the Procrustes alignment using the most frequent words as anchor points. A new distance metric for the embedding space, referred to as cross-domain similarity local scaling, is also introduced. This metric, which takes into account the nearest neighbors of both source and target words, was shown to better handle high-density regions of the space, thus alleviating the hubness problem of word embedding models BIBREF33 , BIBREF34 , which arises when a few points (known as hubs) become the nearest neighbors of many other points in the embedding space.
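As commonly formulated, this measure discounts words that lie in dense neighbourhoods; the sketch below is an illustrative implementation assuming L2-normalized embedding matrices, not the exact MUSE code:

```python
import numpy as np

def csls(src_vec, tgt_emb, src_emb, k=10):
    """CSLS scores of one (mapped) source vector against all target words.

    All embeddings are assumed to be L2-normalized, so dot products are cosines.
    """
    cos_st = tgt_emb @ src_vec                  # cos(x, y) for every target word y
    r_src = np.mean(np.sort(cos_st)[-k:])       # avg similarity of x to its k nearest targets
    cos_ts = src_emb @ tgt_emb.T                # (n_src, n_tgt) source-target cosines
    r_tgt = np.mean(np.sort(cos_ts, axis=0)[-k:, :], axis=0)  # neighbourhood density per target
    return 2 * cos_st - r_src - r_tgt           # down-weights hubs in dense regions
```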
Meeting in the middle
After the initial alignment of the monolingual word embeddings, our proposed method leverages an additional linear model to refine the resulting bilingual word embeddings. This is because the methods presented in the previous section apply constraints to ensure that the structure of the monolingual embeddings is largely preserved. As already mentioned in the introduction, conceptually this may not be optimal, as embeddings for different languages and trained from different corpora can be expected to be structured somewhat differently. Empirically, as we will see in the evaluation, after applying methods such as VecMap and MUSE there still tend to be significant gaps between the vector representations of words and their translations. Our method directly attempts to reduce these gaps by moving each word vector towards the middle point between its current representation and the representation of its translation. In this way, by bringing the two monolingual fragments of the space closer to each other, we can expect to see an improved performance on cross-lingual evaluation tasks such as bilingual dictionary induction. Importantly, the internal structure of the two monolingual fragments themselves is also affected by this step. By averaging between the representations obtained from different languages, we hypothesize that the impact of language-specific phenomena and corpus specific biases will be reduced, thereby ending up with more “neutral” monolingual embeddings.
In the following, we detail our methodological approach. First, we leverage the same bilingual dictionary that was used to obtain the initial alignment (Section "Aligning monolingual spaces" ). Specifically, let $D=\lbrace (w,w^{\prime })\rbrace $ be the given bilingual dictionary, where $w \in V$ and $w^{\prime } \in V^{\prime }$ , with $V$ and $V^{\prime }$ representing the vocabulary of the first and second language, respectively. For pairs $(w,w^{\prime }) \in D$ , we can simply compute the corresponding average vector $\vec{\mu }_{w,w^{\prime }}=\frac{\vec{v}_w+\vec{v}_{w^{\prime }}}{2}$ . Then, using the pairs in $D$ as training data, we learn a linear mapping $X$ such that $X \vec{v}_w \approx \vec{\mu }_{w,w^{\prime }}$ for all $(w,w^{\prime }) \in D$ . This mapping $X$ can then be used to predict the averages for words outside the given dictionary. To find the mapping $X$ , we solve the following least squares linear regression problem:
$$E=\sum _{(w,w^{\prime }) \in D} \Vert X\vec{v}_w-\vec{\mu }_{w,w^{\prime }}\Vert ^2$$ (Eq. 6)
Similarly, for the other language, we separately learn a mapping $X^{\prime }$ such that $X^{\prime } \vec{v}_{w^{\prime }} \approx \vec{\mu }_{w,w^{\prime }}$ .
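A minimal NumPy sketch of this step is given below; it assumes the aligned embeddings are stored as row-vector matrices and that the dictionary pairs have already been resolved to row indices:

```python
import numpy as np

def learn_average_mapping(src_emb, tgt_emb, src_idx, tgt_idx):
    """Learn X minimizing ||X v_w - mu_{w,w'}||^2 over the dictionary pairs.

    src_emb, tgt_emb: (n_words, dim) matrices of already-aligned embeddings.
    src_idx, tgt_idx: row indices of the dictionary pairs (w, w').
    Returns a (dim, dim) matrix X, applied to row vectors as  v_w @ X.T.
    """
    V = src_emb[src_idx]                                 # source-side dictionary vectors
    M = (src_emb[src_idx] + tgt_emb[tgt_idx]) / 2.0      # average vectors mu_{w,w'}
    # Solve V @ X_T ~= M in the least-squares sense (exact closed-form solution).
    X_T, _residuals, _rank, _sv = np.linalg.lstsq(V, M, rcond=None)
    return X_T.T

# The mapping X' for the second language is learned analogously by swapping roles.
```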
It is worth pointing out that we experimented with several variants of this linear regression formulation. For example, we also tried using a multilayer perceptron to learn non-linear mappings, and we experimented with several regularization terms to penalize mappings that deviate too much from the identity mapping. None of these variants, however, were found to improve on the much simpler formulation in ( 6 ), which can be solved exactly and efficiently. Furthermore, one may wonder whether the initial alignment is actually needed, since e.g., coates2018frustratingly obtained high-quality meta-embeddings without such an alignment step. However, when applying our approach directly to the initial monolingual non-aligned embedding spaces, we obtained results which were competitive but slightly below the two considered alignment strategies.
Evaluation
We test our bilingual embedding refinement approach on both intrinsic and extrinsic tasks. In Section "Cross-lingual embeddings training" we describe the common training setup for all experiments and language pairs. The languages we considered are English, Spanish, Italian, German and Finnish. Throughout all the experiments we use publicly available resources in order to make comparisons and reproducibility of our experiments easier.
Cross-lingual embeddings training
Corpora. In our experiments we make use of web-extracted corpora. For English we use the 3B-word UMBC WebBase Corpus BIBREF35 , while we chose the Spanish Billion Words Corpus BIBREF36 for Spanish. For Italian and German, we use the itWaC and sdeWaC corpora from the WaCky project BIBREF37 , containing 2 and 0.8 billion words, respectively. Lastly, for Finnish, we use the Common Crawl monolingual corpus from the Machine Translation of News Shared Task 2016, composed of 2.8B words. All corpora are tokenized and lowercased.
Monolingual embeddings. The monolingual word embeddings are trained with the Skipgram model from FastText BIBREF31 on the corpora described above. The dimensionality of the vectors was set to 300, with the default FastText hyperparameters.
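For reference, a training call along these lines could look as follows with the official FastText Python bindings; the corpus file name is a placeholder, and only the 300-dimensional setting is taken from the paper:

```python
import fasttext

# Placeholder corpus path; the paper uses tokenized, lowercased web corpora and
# reports 300 dimensions with otherwise default FastText hyperparameters.
model = fasttext.train_unsupervised(
    "umbc_webbase_tokenized.txt",
    model="skipgram",
    dim=300,
)
model.save_model("english_skipgram_300.bin")
```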
Bilingual dictionaries. We use the bilingual dictionaries packaged together by artetxe-labaka-agirre:2017:Long, each one composed of 5000 word translations. They are used both for the initial bilingual mappings and then again for our linear transformation.
Initial mapping. Following previous works, for the purpose of obtaining the initial alignment, English is considered as source language and the remaining languages are used as target. We make use of the open-source implementations of VecMap BIBREF11 and MUSE BIBREF12 , which constitute strong baselines for our experiments (cf. Section "Aligning monolingual spaces" ). Both of them were used with the recommended parameters and in their supervised setting, using the aforementioned bilingual dictionaries.
Meeting in the Middle. Then, once the initial cross-lingual embeddings are trained, and as explained in Section "Meeting in the middle" , we obtain our linear transformation by using the exact solution to the least squares linear regression problem. To this end, we use the same bilingual dictionaries as in the previous step. Henceforth, we will refer to our transformed models as VecMap $_\mu $ and MUSE $_\mu $ , depending on the initial mapping.
Experiments
We test our cross-lingual word embeddings in two intrinsic tasks, i.e., bilingual dictionary induction (Section UID14 ) and word similarity (Section UID20 ), and an extrinsic task, i.e., cross-lingual hypernym discovery (Section UID31 ).
The dictionary induction task consists in automatically generating a bilingual dictionary from a source to a target language, using as input a list of words in the source language.
For this task, and following previous works, we use the English-Italian test set released by dinu2015improving and those released by artetxe-labaka-agirre:2017:Long for the remaining language pairs. These test sets have no overlap with respect to the training and development sets, and contain around 1900 entries each. Given an input word from the source language, word translations are retrieved through a nearest-neighbor search of words in the target language, using cosine distance. Note that this gives us a ranked list of candidates for each word from the source language. Accordingly, the performance of the embeddings is evaluated with the precision at $k$ ( $P@k$ ) metric, which evaluates for what percentage of test pairs the correct answer is among the $k$ highest-ranked candidates.
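A sketch of this retrieval-and-scoring procedure, assuming L2-normalized embedding matrices so that cosine similarity reduces to a dot product:

```python
import numpy as np

def precision_at_k(src_emb, tgt_emb, test_pairs, k=5):
    """P@k for dictionary induction with cosine nearest-neighbour retrieval.

    src_emb, tgt_emb: (n, dim) L2-normalized embedding matrices.
    test_pairs: list of (source_index, set_of_gold_target_indices) tuples.
    """
    hits = 0
    for src_idx, gold_indices in test_pairs:
        similarities = tgt_emb @ src_emb[src_idx]     # cosine similarity to every target word
        top_k = np.argsort(-similarities)[:k]         # k highest-ranked candidates
        hits += bool(gold_indices.intersection(top_k.tolist()))
    return hits / len(test_pairs)
```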
As can be seen in Table 1 , our refinement method consistently improves over the baselines (i.e., VecMap and MUSE) on all language pairs and metrics. The higher scores indicate that the two monolingual embedding spaces become more tightly integrated because of our additional transformation. It is worth highlighting here the case of English-Finnish, where the gains obtained in $P@5$ and $P@10$ are considerable. This might indicate that our approach is especially useful for morphologically richer languages such as Finnish, where the limitations of the previous bilingual mappings are most apparent.
When analyzing the source of errors in $P@1$ , we came to similar conclusions as artetxe-labaka-agirre:2017:Long. Several source words are translated to words that are closely related to the one in the gold reference in the target language; e.g., for the English word essentially we obtain básicamente (basically) instead of fundamentalmente (fundamentally) in Spanish, both of them closely related, or the closest neighbor for dirt being mugre (dirt) instead of suciedad (dirt), which in fact was among the five closest neighbors. We can also find multiple examples of the higher performance of our models compared to the baselines. For instance, in the English-Spanish cross-lingual models, after the initial alignment, we can find that seconds has minutos (minutes) as nearest neighbour, but after applying our additional transformation, seconds becomes closest to segundos (seconds). Similarly, paint initially has tintado (tinted) as the closest Spanish word, and then pintura (paint).
We perform experiments on both monolingual and cross-lingual word similarity. In monolingual similarity, models are tested in their ability to determine the similarity between two words in the same language, whereas in cross-lingual similarity the words belong to different languages. While in the monolingual setting the main objective is to test the quality of the monolingual subsets of the bilingual vector space, the cross-lingual setting constitutes a straightforward benchmark to test the quality of bilingual embeddings.
For monolingual word similarity we use the English SimLex-999 BIBREF38 , and the language-specific versions of SemEval-17 BIBREF39 , WordSim-353 BIBREF40 , and RG-65 BIBREF41 . The corresponding cross-lingual datasets from SemEval-18, WordSim-353 and RG-65 were considered for the cross-lingual word similarity evaluation. Cosine similarity is again used as comparison measure.
Tables 2 and 3 show the monolingual and cross-lingual word similarity results, respectively. For both the monolingual and cross-lingual settings, we can notice that our models generally outperform the corresponding baselines. Moreover, in cases where no improvement is obtained, the differences tend to be minimal, with the exception of RG-65, but this is a very small test set for which larger variations can thus be expected. In contrast, there are a few cases where substantial gains were obtained by using our model. This is most notable for English WordSim and SimLex in the monolingual setting.
In order to further understand the movements of the space with respect to the original VecMap and MUSE spaces, Figure 1 displays the average similarity values on the SemEval cross-lingual datasets (the largest among all benchmarks) of each model. As expected, the figure clearly shows how our model consistently brings the words from both languages closer on all language pairs. Furthermore, this movement is performed smoothly across all pairs, i.e., our model does not make large changes to specific words but rather small changes overall. This can be verified by inspecting the standard deviation of the difference in similarity after applying our transformation. These standard deviation scores range from 0.031 (English-Spanish for VecMap) to 0.039 (English-Italian for MUSE), which are relatively small given that the cosine similarity scale ranges from -1 to 1.
As a complement of this analysis we show some qualitative results which give us further insights on the transformations of the vector space after our average approximation. In particular, we analyze the reasons behind the higher quality displayed by our bilingual embeddings in monolingual settings. While VecMap and MUSE do not transform the initial monolingual spaces, our model transforms both spaces simultaneously. In this analysis we focus on the source language of our experiments (i.e., English). We found interesting patterns which are learned by our model and help understand these monolingual gains. For example, a recurring pattern is that words in English which are translated to the same word, or to semantically close words, in the target language end up closer together after our transformation. For example, in the case of English-Spanish the following pairs were among the pairs whose similarity increased the most by applying our transformation: cellphone-telephone, movie-film, book-manuscript or rhythm-cadence, which are either translated to the same word in Spanish (i.e., teléfono and película in the first two cases) or are already very close in the Spanish space. More generally, we found that word pairs which move together the most tend to be semantically very similar and belong to the same domain, e.g., car-bicycle, opera-cinema, or snow-ice.
Modeling hypernymy is a crucial task in NLP, with direct applications in diverse areas such as semantic search BIBREF43 , BIBREF44 , question answering BIBREF45 , BIBREF46 or textual entailment BIBREF47 . Hypernyms, in addition, are the backbone of lexical ontologies BIBREF48 , which are in turn useful for organizing, navigating and retrieving online content BIBREF49 . Thus, we propose to evaluate the contribution of cross-lingual embeddings towards the task of hypernym discovery, i.e., given an input word (e.g., cat), retrieve or discover its most likely (set of) valid hypernyms (e.g., animal, mammal, feline, and so on). Intuitively, by leveraging a bilingual vector space condensing the semantics of two languages, one of them being English, the need for large amounts of training data in the target language may be reduced.
We follow EspinosaEMNLP2016 and learn a (cross-lingual) linear transformation matrix between the hyponym and hypernym spaces, which is afterwards used to predict the most likely (set of) hypernyms, given an unseen hyponym. Training and evaluation data come from the SemEval 2018 Shared Task on Hypernym Discovery BIBREF50 . Note that current state-of-the-art systems aimed at modeling hypernymy BIBREF51 , BIBREF52 combine large amounts of annotated data along with language-specific rules and cue phrases such as Hearst Patterns BIBREF53 , both of which are generally scarcely (if at all) available for languages other than English. Therefore, we report experiments with training data only from English (11,779 hyponym-hypernym pairs), and “enriched” models informed with relatively few training pairs (500, 1k and 2k) from the target languages. Evaluation is conducted with the same metrics as in the original SemEval task, i.e., Mean Reciprocal Rank (MRR), Mean Average Precision (MAP) and Precision at 5 (P@5). These measures explain a model's behavior from complementary prisms, namely how often at least one valid hypernym was highly ranked (MRR), and in cases where there is more than one correct hypernym, to what extent they were all correctly retrieved (MAP and P@5). Finally, as in the previous experiments, we report comparative results between our proposed models and the two competing baselines (VecMap and MUSE). As an additional informative baseline, we include the highest scoring unsupervised system at the SemEval task for both Spanish and Italian (BestUns), which is based on the distributional models described in shwartz2017hypernymy.
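As an illustration of the main ranking metric, a minimal MRR computation could look as follows; candidate rankings and gold hypernym sets are assumed to be aligned lists:

```python
def mean_reciprocal_rank(rankings, gold_sets):
    """MRR: reciprocal rank of the first valid hypernym per hyponym, else 0.

    rankings: list of ranked candidate lists (best first), one per input hyponym.
    gold_sets: list of sets of valid hypernyms, aligned with `rankings`.
    """
    total = 0.0
    for ranked_candidates, gold in zip(rankings, gold_sets):
        for rank, candidate in enumerate(ranked_candidates, start=1):
            if candidate in gold:
                total += 1.0 / rank
                break
    return total / len(rankings)
```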
The results listed in Table 4 indicate several trends. First and foremost, in terms of model-wise comparisons, we observe that our proposed alterations of both VecMap and MUSE improve their quality in a consistent manner, across most metrics and data configurations. In Italian our proposed model shows an improvement across all configurations. However, in Spanish VecMap emerges as a highly competitive baseline, with our model only showing an improved performance when training data in this language abounds (in this specific case there is an increase from 17.2 to 19.5 points in the MRR metric). This suggests that the fact that the monolingual spaces are closer in our model is clearly beneficial when hybrid training data is given as input, opening up avenues for future work on weakly-supervised learning. Concerning the other baseline, MUSE, the contribution of our proposed model is consistent for both languages, again becoming more apparent in the Italian split and in a fully cross-lingual setting, where the improvement in MRR is almost 3 points (from 10.6 to 13.3). Finally, it is noteworthy that even in the setting where no training data from the target language is leveraged, all the systems based on cross-lingual embeddings outperform the best unsupervised baseline, which is a very encouraging result with regards to solving tasks for languages on which training data is not easily accessible or not directly available.
A manual exploration of the results obtained in cross-lingual hypernym discovery reveals a systematic pattern when comparing, for example, VecMap and our model. It was shown in Table 4 that the performance of our model gradually increased alongside the size of the training data in the target language until surpassing VecMap in the most informed configuration (i.e., EN+2k). Specifically, our model seems to show a higher presence of generic words in the output hypernyms, which may be explained by these being closer in the space. In fact, out of 1000 candidate hyponyms, our model correctly finds person 143 times, as compared to the 111 of VecMap, and this systematically occurs with generic types such as citizen or transport. Let us mention, however, that the considered baselines perform remarkably well in some cases. For example, the English-only VecMap configuration (EN), unlike ours, correctly discovered the following hypernyms for Francesc Macià (a Spanish politician and soldier): politician, ruler, leader and person. These were missing from the prediction of our model in all configurations until the most informed one (EN+2k).
Conclusions and Future Work
We have shown how to refine bilingual word embeddings by applying a simple transformation which moves cross-lingual synonyms closer towards their average representation. Before applying this strategy, we start by aligning the monolingual embeddings of the two languages of interest. For this initial alignment, we have considered two state-of-the-art methods from the literature, namely VecMap BIBREF11 and MUSE BIBREF12 , which also served as our baselines. Our approach is motivated by the fact that these alignment methods do not change the structure of the individual monolingual spaces. However, the internal structure of embeddings is, at least to some extent, language-specific, and is moreover affected by biases of the corpus from which they are trained, meaning that after the initial alignment significant gaps remain between the representations of cross-lingual synonyms. We tested our approach on a wide array of datasets from different tasks (i.e., bilingual dictionary induction, word similarity and cross-lingual hypernym discovery) with state-of-the-art results.
This paper opens up several promising avenues for future work. First, even though both languages are currently being treated symmetrically, the initial monolingual embedding of one of the languages may be more reliable than that of the other. In such cases, it may be of interest to replace the vectors $\vec{\mu }_ {w,w^{\prime }}$ by a weighted average of the monolingual word vectors. Second, while we have only considered bilingual scenarios in this paper, our approach can naturally be applied to scenarios involving more languages. In this case, we would first choose a single target language, and obtain alignments between all the other languages and this target language. To apply our model, we can then simply learn mappings to predict averaged word vectors across all languages. Finally, it would also be interesting to use the obtained embeddings in downstream applications such as language identification or cross-lingual sentiment analysis, and extend our analysis to other languages, with a particular focus on morphologically-rich languages (after seeing our success with Finnish), for which the bilingual induction task has proved more challenging for standard cross-lingual embedding models BIBREF9 .
Acknowledgments
Yerai Doval is funded by the Spanish Ministry of Economy, Industry and Competitiveness (MINECO) through project FFI2014-51978-C2-2-R, and by the Spanish State Secretariat for Research, Development and Innovation (which belongs to MINECO) and the European Social Fund (ESF) under a FPI fellowship (BES-2015-073768) associated to project FFI2014-51978-C2-1-R. Jose Camacho-Collados, Luis Espinosa-Anke and Steven Schockaert have been supported by ERC Starting Grant 637277. | bilingual dictionary induction, monolingual and cross-lingual word similarity, and cross-lingual hypernym discovery |
a979749e59e6e300a453d8a8b1627f97101799de | a979749e59e6e300a453d8a8b1627f97101799de_0 | Q: Why does the model improve in monolingual spaces as well?
Text: Introduction
Word embeddings are one of the most widely used resources in NLP, as they have proven to be of enormous importance for modeling linguistic phenomena in both supervised and unsupervised settings. In particular, the representation of words in cross-lingual vector spaces (henceforth, cross-lingual word embeddings) is quickly gaining in popularity. One of the main reasons is that they play a crucial role in transferring knowledge from one language to another, specifically in downstream tasks such as information retrieval BIBREF0 , entity linking BIBREF1 and text classification BIBREF2 , while at the same time providing improvements in multilingual NLP problems such as machine translation BIBREF3 .
There exist different approaches for obtaining these cross-lingual embeddings. One of the most successful methodological directions, which constitutes the main focus of this paper, attempts to learn bilingual embeddings via a two-step process: first, word embeddings are trained on monolingual corpora and then the resulting monolingual spaces are aligned by taking advantage of bilingual dictionaries BIBREF4 , BIBREF5 , BIBREF6 .
These alignments are generally modeled as linear transformations, which are constrained such that the structure of the initial monolingual spaces is left unchanged. This can be achieved by imposing an orthogonality constraint on the linear transformation BIBREF6 , BIBREF7 . Our hypothesis in this paper is that such approaches can be further improved, as they rely on the assumption that the internal structure of the two monolingual spaces is identical. In reality, however, this structure is influenced by language-specific phenomena, e.g., the fact that Spanish distinguishes between masculine and feminine nouns BIBREF8 as well as the specific biases of the different corpora from which the monolingual spaces were learned. Because of this, monolingual embedding spaces are not isomorphic BIBREF9 , BIBREF10 . On the other hand, simply dropping the orthogonality constraints leads to overfitting, and is thus not effective in practice.
The solution we propose is to start with existing state-of-the-art alignment models BIBREF11 , BIBREF12 , and to apply a further transformation to the resulting initial alignment. For each word $w$ with translation $w^{\prime }$ , this additional transformation aims to map the vector representations of both $w$ and $w^{\prime }$ onto their average, thereby creating a cross-lingual vector space which intuitively corresponds to the average of the two aligned monolingual vector spaces. Similar to the initial alignment, this mapping is learned from a small bilingual lexicon.
Our experimental results show that the proposed additional transformation does not only benefit cross-lingual evaluation tasks, but, perhaps surprisingly, also monolingual ones. In particular, we perform an extensive set of experiments on standard benchmarks for bilingual dictionary induction and monolingual and cross-lingual word similarity, as well as on an extrinsic task: cross-lingual hypernym discovery.
Code and pre-trained embeddings to reproduce our experiments and to apply our model to any given cross-lingual embeddings are available at https://github.com/yeraidm/meemi.
Related Work
Bilingual word embeddings have been extensively studied in the literature in recent years. Their nature varies with respect to the supervision signals used for training BIBREF13 , BIBREF14 . Some common signals to learn bilingual embeddings come from parallel BIBREF15 , BIBREF16 , BIBREF17 or comparable corpora BIBREF18 , BIBREF19 , BIBREF20 , or lexical resources such as WordNet, ConceptNet or BabelNet BIBREF21 , BIBREF22 , BIBREF23 . However, these sources of supervision may be scarce, limited to certain domains or may not be directly available for certain language pairs.
Another branch of research exploits pre-trained monolingual embeddings with weak signals such as bilingual lexicons for learning bilingual embeddings BIBREF4 , BIBREF5 , BIBREF24 , BIBREF7 . mikolov2013exploiting was one of the first attempts into this line of research, applying a linear transformation in order to map the embeddings from one monolingual space into another. They also noted that more sophisticated approaches, such as using multilayer perceptrons, do not improve with respect to their linear counterparts. xing2015normalized built upon this work by normalizing word embeddings during training and adding an orthogonality constraint. In a complementary direction, faruqui2014improving put forward a technique based on canonical correlation analysis to obtain linear mappings for both monolingual embedding spaces into a new shared space. artetxe2016learning proposed a similar linear mapping to mikolov2013exploiting, generalizing it and providing theoretical justifications which also served to reinterpret the methods of faruqui2014improving and xing2015normalized. smith2017offline further showed how orthogonality was required to improve the consistency of bilingual mappings, making them more robust to noise. Finally, a more complete generalization providing further insights on the linear transformations used in all these models can be found in artetxe2018generalizing.
These approaches generally require large bilingual lexicons to effectively learn multilingual embeddings BIBREF11 . Recently, however, alternatives which only need very small dictionaries, or even none at all, have been proposed to learn high-quality embeddings via linear mappings BIBREF11 , BIBREF12 . More details on the specifics of these two approaches can be found in Section "Aligning monolingual spaces" . These models have in turn paved the way for the development of machine translation systems which do not require any parallel corpora BIBREF25 , BIBREF26 . Moreover, the fact that such approaches only need monolingual embeddings, instead of parallel or comparable corpora, makes them easily adaptable to different domains (e.g., social media or web corpora).
In this paper we build upon these state-of-the-art approaches by applying an additional transformation, which aims to map each word and its translation onto the average of their vector representations. This strategy bears some resemblance with the idea of learning meta-embeddings BIBREF27 . Meta-embeddings are vector space representations which aggregate several pre-trained word embeddings from a given language (e.g., trained using different corpora and/or different word embedding models). Empirically it was found that such meta-embeddings can often outperform the individual word embeddings from which they were obtained. In particular, it was recently argued that word vector averaging can be a highly effective approach for learning such meta-embeddings BIBREF28 . The main difference between such approaches and our work is that because we rely on a small dictionary, we cannot simply average word vectors, since for most words we do not know the corresponding translation. Instead, we train a regression model to predict this average word vector from the vector representation of the given word only, i.e., without using the vector representation of its translation.
Methodology
Our approach for improving cross-lingual embeddings consists of three main steps, where the first two steps are the same as in existing methods. In particular, given two monolingual corpora, a word vector space is first learned independently for each language. This can be achieved with common word embedding models, e.g., Word2vec BIBREF29 , GloVe BIBREF30 or FastText BIBREF31 . Second, a linear alignment strategy is used to map the monolingual embeddings to a common bilingual vector space (Section "Aligning monolingual spaces" ). Third, a final transformation is applied on the aligned embeddings so the word vectors from both languages are refined and further integrated with each other (Section "Meeting in the middle" ). This third step is the main contribution of our paper.
Aligning monolingual spaces
Once the monolingual word embeddings have been obtained, a linear transformation is applied in order to integrate them into the same vector space. This linear transformation is generally carried out using a supervision signal, typically in the form of a bilingual dictionary. In the following we explain two state-of-the-art models performing this linear transformation.
VecMap uses an orthogonal transformation over normalized word embeddings. An iterative two-step procedure is also implemented in order to avoid the need of starting with a large seed dictionary (e.g., in the original paper it was tested with a very small bilingual dictionary of just 25 pairs). In this procedure, first, the linear mapping is estimated using a small bilingual dictionary, and then, this dictionary is augmented by applying the learned transformation to new words from the source language. Lastly, the process is repeated until some convergence criterion is met.
In MUSE, the transformation matrix is learned through an iterative Procrustes alignment BIBREF32 . The anchor points needed for this alignment can be obtained either through a supplied bilingual dictionary or through an unsupervised model. This unsupervised model is trained using adversarial learning to obtain an initial alignment of the two monolingual spaces, which is then refined by the Procrustes alignment using the most frequent words as anchor points. A new distance metric for the embedding space, referred to as cross-domain similarity local scaling, is also introduced. This metric, which takes into account the nearest neighbors of both source and target words, was shown to better handle high-density regions of the space, thus alleviating the hubness problem of word embedding models BIBREF33 , BIBREF34 , which arises when a few points (known as hubs) become the nearest neighbors of many other points in the embedding space.
Meeting in the middle
After the initial alignment of the monolingual word embeddings, our proposed method leverages an additional linear model to refine the resulting bilingual word embeddings. This is because the methods presented in the previous section apply constraints to ensure that the structure of the monolingual embeddings is largely preserved. As already mentioned in the introduction, conceptually this may not be optimal, as embeddings for different languages and trained from different corpora can be expected to be structured somewhat differently. Empirically, as we will see in the evaluation, after applying methods such as VecMap and MUSE there still tend to be significant gaps between the vector representations of words and their translations. Our method directly attempts to reduce these gaps by moving each word vector towards the middle point between its current representation and the representation of its translation. In this way, by bringing the two monolingual fragments of the space closer to each other, we can expect to see an improved performance on cross-lingual evaluation tasks such as bilingual dictionary induction. Importantly, the internal structure of the two monolingual fragments themselves is also affected by this step. By averaging between the representations obtained from different languages, we hypothesize that the impact of language-specific phenomena and corpus specific biases will be reduced, thereby ending up with more “neutral” monolingual embeddings.
In the following, we detail our methodological approach. First, we leverage the same bilingual dictionary that was used to obtain the initial alignment (Section "Aligning monolingual spaces" ). Specifically, let $D=\lbrace (w,w^{\prime })\rbrace $ be the given bilingual dictionary, where $w \in V$ and $w^{\prime } \in V^{\prime }$ , with $V$ and $V^{\prime }$ representing the vocabulary of the first and second language, respectively. For pairs $(w,w^{\prime }) \in D$ , we can simply compute the corresponding average vector $\vec{\mu }_{w,w^{\prime }}=\frac{\vec{v}_w+\vec{v}_{w^{\prime }}}{2}$ . Then, using the pairs in $D$ as training data, we learn a linear mapping $X$ such that $X \vec{v}_w \approx \vec{\mu }_{w,w^{\prime }}$ for all $(w,w^{\prime }) \in D$ . This mapping $X$ can then be used to predict the averages for words outside the given dictionary. To find the mapping $X$ , we solve the following least squares linear regression problem:
$$E=\sum _{(w,w^{\prime }) \in D} \Vert X\vec{v}_w-\vec{\mu }_{w,w^{\prime }}\Vert ^2$$ (Eq. 6)
Similarly, for the other language, we separately learn a mapping $X^{\prime }$ such that $X^{\prime } \vec{v}_{w^{\prime }} \approx \vec{\mu }_{w,w^{\prime }}$ .
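Complementing the formulation above, the refined space is obtained by applying the two learned mappings to every word vector of the respective language; a small illustrative sketch, assuming row-vector embedding matrices and (dim, dim) mappings:

```python
def refine_embeddings(src_emb, tgt_emb, X, X_prime):
    """Move every word of both languages towards its predicted average vector.

    src_emb, tgt_emb: (n, dim) aligned embedding matrices (row vectors).
    X, X_prime: (dim, dim) mappings learned from the bilingual dictionary.
    """
    refined_src = src_emb @ X.T          # predicted averages for source-language words
    refined_tgt = tgt_emb @ X_prime.T    # predicted averages for target-language words
    return refined_src, refined_tgt
```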
It is worth pointing out that we experimented with several variants of this linear regression formulation. For example, we also tried using a multilayer perceptron to learn non-linear mappings, and we experimented with several regularization terms to penalize mappings that deviate too much from the identity mapping. None of these variants, however, were found to improve on the much simpler formulation in ( 6 ), which can be solved exactly and efficiently. Furthermore, one may wonder whether the initial alignment is actually needed, since e.g., coates2018frustratingly obtained high-quality meta-embeddings without such an alignment step. However, when applying our approach directly to the initial monolingual non-aligned embedding spaces, we obtained results which were competitive but slightly below the two considered alignment strategies.
Evaluation
We test our bilingual embedding refinement approach on both intrinsic and extrinsic tasks. In Section "Cross-lingual embeddings training" we describe the common training setup for all experiments and language pairs. The languages we considered are English, Spanish, Italian, German and Finnish. Throughout all the experiments we use publicly available resources in order to make comparisons and reproducibility of our experiments easier.
Cross-lingual embeddings training
Corpora. In our experiments we make use of web-extracted corpora. For English we use the 3B-word UMBC WebBase Corpus BIBREF35 , while we chose the Spanish Billion Words Corpus BIBREF36 for Spanish. For Italian and German, we use the itWaC and sdeWaC corpora from the WaCky project BIBREF37 , containing 2 and 0.8 billion words, respectively. Lastly, for Finnish, we use the Common Crawl monolingual corpus from the Machine Translation of News Shared Task 2016, composed of 2.8B words. All corpora are tokenized and lowercased.
Monolingual embeddings. The monolingual word embeddings are trained with the Skipgram model from FastText BIBREF31 on the corpora described above. The dimensionality of the vectors was set to 300, with the default FastText hyperparameters.
Bilingual dictionaries. We use the bilingual dictionaries packaged together by artetxe-labaka-agirre:2017:Long, each one composed of 5000 word translations. They are used both for the initial bilingual mappings and then again for our linear transformation.
Initial mapping. Following previous works, for the purpose of obtaining the initial alignment, English is considered as source language and the remaining languages are used as target. We make use of the open-source implementations of VecMap BIBREF11 and MUSE BIBREF12 , which constitute strong baselines for our experiments (cf. Section "Aligning monolingual spaces" ). Both of them were used with the recommended parameters and in their supervised setting, using the aforementioned bilingual dictionaries.
Meeting in the Middle. Then, once the initial cross-lingual embeddings are trained, we obtain our linear transformation by using the exact solution to the least squares linear regression problem described above. To this end, we use the same bilingual dictionaries as in the previous step. Henceforth, we will refer to our transformed models as VecMap $_\mu $ and MUSE $_\mu $ , depending on the initial mapping.
Experiments
We test our cross-lingual word embeddings in two intrinsic tasks, i.e., bilingual dictionary induction and word similarity, and an extrinsic task, i.e., cross-lingual hypernym discovery.
The dictionary induction task consists in automatically generating a bilingual dictionary from a source to a target language, using as input a list of words in the source language.
For this task, and following previous works, we use the English-Italian test set released by dinu2015improving and those released by artetxe-labaka-agirre:2017:Long for the remaining language pairs. These test sets have no overlap with respect to the training and development sets, and contain around 1900 entries each. Given an input word from the source language, word translations are retrieved through a nearest-neighbor search of words in the target language, using cosine distance. Note that this gives us a ranked list of candidates for each word from the source language. Accordingly, the performance of the embeddings is evaluated with the precision at $k$ ( $P@k$ ) metric, which measures the percentage of test pairs for which the correct answer is among the $k$ highest-ranked candidates.
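For concreteness, the retrieval and scoring procedure can be sketched as follows. We assume the cross-lingual embeddings are already loaded into memory and that the test dictionary maps each source word to its set of gold translations; variable names are illustrative.

```python
import numpy as np

def precision_at_k(test_pairs, src_emb, tgt_matrix, tgt_words, k=5):
    """test_pairs: dict source word -> set of gold translations.
    src_emb: dict word -> vector; tgt_matrix: |V'| x d matrix; tgt_words: row index -> word."""
    tgt_norm = tgt_matrix / np.linalg.norm(tgt_matrix, axis=1, keepdims=True)
    hits = 0
    for src_word, gold in test_pairs.items():
        q = src_emb[src_word]
        sims = tgt_norm @ (q / np.linalg.norm(q))       # cosine similarity to every target word
        top_k = [tgt_words[i] for i in np.argsort(-sims)[:k]]
        hits += int(any(t in gold for t in top_k))
    return hits / len(test_pairs)
```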
As can be seen in Table 1 , our refinement method consistently improves over the baselines (i.e., VecMap and MUSE) on all language pairs and metrics. The higher scores indicate that the two monolingual embedding spaces become more tightly integrated because of our additional transformation. It is worth highlighting here the case of English-Finnish, where the gains obtained in $P@5$ and $P@10$ are considerable. This might indicate that our approach is especially useful for morphologically richer languages such as Finnish, where the limitations of the previous bilingual mappings are most apparent.
When analyzing the source of errors in $P@1$ , we came to similar conclusions as artetxe-labaka-agirre:2017:Long. Several source words are translated to words that are closely related to the one in the gold reference in the target language; e.g., for the English word essentially we obtain básicamente (basically) instead of fundamentalmente (fundamentally) in Spanish, both of them closely related, or the closest neighbor for dirt being mugre (dirt) instead of suciedad (dirt), which in fact was among the five closest neighbors. We can also find multiple examples of the higher performance of our models compared to the baselines. For instance, in the English-Spanish cross-lingual models, after the initial alignment, we can find that seconds has minutos (minutes) as nearest neighbour, but after applying our additional transformation, seconds becomes closest to segundos (seconds). Similarly, paint initially has tintado (tinted) as the closest Spanish word, and then pintura (paint).
We perform experiments on both monolingual and cross-lingual word similarity. In monolingual similarity, models are tested in their ability to determine the similarity between two words in the same language, whereas in cross-lingual similarity the words belong to different languages. While in the monolingual setting the main objective is to test the quality of the monolingual subsets of the bilingual vector space, the cross-lingual setting constitutes a straightforward benchmark to test the quality of bilingual embeddings.
For monolingual word similarity we use the English SimLex-999 BIBREF38 , and the language-specific versions of SemEval-17 BIBREF39 , WordSim-353 BIBREF40 , and RG-65 BIBREF41 . The corresponding cross-lingual datasets from SemEval-17, WordSim-353 and RG-65 were considered for the cross-lingual word similarity evaluation. Cosine similarity is again used as the comparison measure.
Tables 2 and 3 show the monolingual and cross-lingual word similarity results, respectively. For both the monolingual and cross-lingual settings, we can notice that our models generally outperform the corresponding baselines. Moreover, in cases where no improvement is obtained, the differences tend to be minimal, with the exception of RG-65, but this is a very small test set for which larger variations can thus be expected. In contrast, there are a few cases where substantial gains were obtained by using our model. This is most notable for English WordSim and SimLex in the monolingual setting.
In order to further understand the movements of the space with respect to the original VecMap and MUSE spaces, Figure 1 displays the average similarity values on the SemEval cross-lingual datasets (the largest among all benchmarks) of each model. As expected, the figure clearly shows how our model consistently brings the words from both languages closer on all language pairs. Furthermore, this movement is performed smoothly across all pairs, i.e., our model does not make large changes to specific words but rather small changes overall. This can be verified by inspecting the standard deviation of the difference in similarity after applying our transformation. These standard deviation scores range from 0.031 (English-Spanish for VecMap) to 0.039 (English-Italian for MUSE), which are relatively small given that the cosine similarity scale ranges from -1 to 1.
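The per-pair statistics reported in this analysis can be reproduced with a few lines of code. The sketch below assumes per-language dictionaries of word vectors before and after our transformation; it is an illustration of the computation, not the original analysis script.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def similarity_shift(pairs, before_l1, before_l2, after_l1, after_l2):
    """pairs: list of (word_l1, word_l2) cross-lingual test pairs.
    Returns the mean change in cosine similarity and its standard deviation."""
    deltas = [cosine(after_l1[w1], after_l2[w2]) - cosine(before_l1[w1], before_l2[w2])
              for w1, w2 in pairs]
    return float(np.mean(deltas)), float(np.std(deltas))
```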
As a complement to this analysis, we show some qualitative results which give us further insight into the transformations of the vector space after our average approximation. In particular, we analyze the reasons behind the higher quality displayed by our bilingual embeddings in monolingual settings. While VecMap and MUSE do not transform the initial monolingual spaces, our model transforms both spaces simultaneously. In this analysis we focus on the source language of our experiments (i.e., English). We found interesting patterns which are learned by our model and help us understand these monolingual gains. For example, a recurring pattern is that words in English which are translated to the same word, or to semantically close words, in the target language end up closer together after our transformation. For instance, in the case of English-Spanish the following pairs were among the pairs whose similarity increased the most by applying our transformation: cellphone-telephone, movie-film, book-manuscript or rhythm-cadence, which are either translated to the same word in Spanish (i.e., teléfono and película in the first two cases) or are already very close in the Spanish space. More generally, we found that word pairs which move together the most tend to be semantically very similar and belong to the same domain, e.g., car-bicycle, opera-cinema, or snow-ice.
Modeling hypernymy is a crucial task in NLP, with direct applications in diverse areas such as semantic search BIBREF43 , BIBREF44 , question answering BIBREF45 , BIBREF46 or textual entailment BIBREF47 . Hypernyms, in addition, are the backbone of lexical ontologies BIBREF48 , which are in turn useful for organizing, navigating and retrieving online content BIBREF49 . Thus, we propose to evaluate the contribution of cross-lingual embeddings towards the task of hypernym discovery, i.e., given an input word (e.g., cat), retrieve or discover its most likely (set of) valid hypernyms (e.g., animal, mammal, feline, and so on). Intuitively, by leveraging a bilingual vector space condensing the semantics of two languages, one of them being English, the need for large amounts of training data in the target language may be reduced.
We follow EspinosaEMNLP2016 and learn a (cross-lingual) linear transformation matrix between the hyponym and hypernym spaces, which is afterwards used to predict the most likely (set of) hypernyms, given an unseen hyponym. Training and evaluation data come from the SemEval 2018 Shared Task on Hypernym Discovery BIBREF50 . Note that current state-of-the-art systems aimed at modeling hypernymy BIBREF51 , BIBREF52 combine large amounts of annotated data along with language-specific rules and cue phrases such as Hearst Patterns BIBREF53 , both of which are generally scarcely (if at all) available for languages other than English. Therefore, we report experiments with training data only from English (11,779 hyponym-hypernym pairs), and “enriched” models informed with relatively few training pairs (500, 1k and 2k) from the target languages. Evaluation is conducted with the same metrics as in the original SemEval task, i.e., Mean Reciprocal Rank (MRR), Mean Average Precision (MAP) and Precision at 5 (P@5). These measures explain a model's behavior from complementary prisms, namely how often at least one valid hypernym was highly ranked (MRR), and in cases where there is more than one correct hypernym, to what extent they were all correctly retrieved (MAP and P@5). Finally, as in the previous experiments, we report comparative results between our proposed models and the two competing baselines (VecMap and MUSE). As an additional informative baseline, we include the highest scoring unsupervised system at the SemEval task for both Spanish and Italian (BestUns), which is based on the distributional models described in shwartz2017hypernymy.
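The following sketch illustrates the overall setup — a least-squares hyponym-to-hypernym mapping and the MRR metric — rather than the exact system of EspinosaEMNLP2016. Training pairs, embeddings and the candidate hypernym vocabulary are assumed to be given, and all names are ours.

```python
import numpy as np

def train_hypernym_map(train_pairs, emb):
    """Least-squares map from hyponym vectors to hypernym vectors."""
    Hypo = np.vstack([emb[h] for h, _ in train_pairs])
    Hyper = np.vstack([emb[g] for _, g in train_pairs])
    M, *_ = np.linalg.lstsq(Hypo, Hyper, rcond=None)
    return M                                              # d x d

def mrr(test_items, emb, M, cand_words, cand_matrix, k=15):
    """test_items: list of (hyponym, set of gold hypernyms)."""
    cand_norm = cand_matrix / np.linalg.norm(cand_matrix, axis=1, keepdims=True)
    scores = []
    for hypo, gold in test_items:
        q = emb[hypo] @ M
        sims = cand_norm @ (q / np.linalg.norm(q))
        ranked = [cand_words[i] for i in np.argsort(-sims)[:k]]
        rank = next((r + 1 for r, c in enumerate(ranked) if c in gold), None)
        scores.append(1.0 / rank if rank else 0.0)
    return float(np.mean(scores))
```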
The results listed in Table 4 indicate several trends. First and foremost, in terms of model-wise comparisons, we observe that our proposed alterations of both VecMap and MUSE improve their quality in a consistent manner, across most metrics and data configurations. In Italian our proposed model shows an improvement across all configurations. However, in Spanish VecMap emerges as a highly competitive baseline, with our model only showing an improved performance when training data in this language abounds (in this specific case there is an increase from 17.2 to 19.5 points in the MRR metric). This suggests that the fact that the monolingual spaces are closer in our model is clearly beneficial when hybrid training data is given as input, opening up avenues for future work on weakly-supervised learning. Concerning the other baseline, MUSE, the contribution of our proposed model is consistent for both languages, again becoming more apparent in the Italian split and in a fully cross-lingual setting, where the improvement in MRR is almost 3 points (from 10.6 to 13.3). Finally, it is noteworthy that even in the setting where no training data from the target language is leveraged, all the systems based on cross-lingual embeddings outperform the best unsupervised baseline, which is a very encouraging result with regards to solving tasks for languages on which training data is not easily accessible or not directly available.
A manual exploration of the results obtained in cross-lingual hypernym discovery reveals a systematic pattern when comparing, for example, VecMap and our model. It was shown in Table 4 that the performance of our model gradually increased alongside the size of the training data in the target language until surpassing VecMap in the most informed configuration (i.e., EN+2k). Specifically, our model seems to show a higher presence of generic words in the output hypernyms, which may be explained by these being closer in the space. In fact, out of 1000 candidate hyponyms, our model correctly finds person 143 times, as compared to the 111 of VecMap, and this systematically occurs with generic types such as citizen or transport. Let us mention, however, that the considered baselines perform remarkably well in some cases. For example, the English-only VecMap configuration (EN), unlike ours, correctly discovered the following hypernyms for Francesc Macià (a Spanish politician and soldier): politician, ruler, leader and person. These were missing from the prediction of our model in all configurations until the most informed one (EN+2k).
Conclusions and Future Work
We have shown how to refine bilingual word embeddings by applying a simple transformation which moves cross-lingual synonyms closer towards their average representation. Before applying this strategy, we start by aligning the monolingual embeddings of the two languages of interest. For this initial alignment, we have considered two state-of-the-art methods from the literature, namely VecMap BIBREF11 and MUSE BIBREF12 , which also served as our baselines. Our approach is motivated by the fact that these alignment methods do not change the structure of the individual monolingual spaces. However, the internal structure of embeddings is, at least to some extent, language-specific, and is moreover affected by biases of the corpus from which they are trained, meaning that after the initial alignment significant gaps remain between the representations of cross-lingual synonyms. We tested our approach on a wide array of datasets from different tasks (i.e., bilingual dictionary induction, word similarity and cross-lingual hypernym discovery) with state-of-the-art results.
This paper opens up several promising avenues for future work. First, even though both languages are currently being treated symmetrically, the initial monolingual embedding of one of the languages may be more reliable than that of the other. In such cases, it may be of interest to replace the vectors $\vec{\mu }_ {w,w^{\prime }}$ by a weighted average of the monolingual word vectors. Second, while we have only considered bilingual scenarios in this paper, our approach can naturally be applied to scenarios involving more languages. In this case, we would first choose a single target language, and obtain alignments between all the other languages and this target language. To apply our model, we can then simply learn mappings to predict averaged word vectors across all languages. Finally, it would also be interesting to use the obtained embeddings in downstream applications such as language identification or cross-lingual sentiment analysis, and extend our analysis to other languages, with a particular focus on morphologically-rich languages (after seeing our success with Finnish), for which the bilingual induction task has proved more challenging for standard cross-lingual embedding models BIBREF9 .
Acknowledgments
Yerai Doval is funded by the Spanish Ministry of Economy, Industry and Competitiveness (MINECO) through project FFI2014-51978-C2-2-R, and by the Spanish State Secretariat for Research, Development and Innovation (which belongs to MINECO) and the European Social Fund (ESF) under a FPI fellowship (BES-2015-073768) associated to project FFI2014-51978-C2-1-R. Jose Camacho-Collados, Luis Espinosa-Anke and Steven Schockaert have been supported by ERC Starting Grant 637277. | because word pair similarity increases if the two words translate to similar parts of the cross-lingual embedding space |
b10632eaa0ca48f86522d8ec38b1d702cb0b8c01 | b10632eaa0ca48f86522d8ec38b1d702cb0b8c01_0 | Q: What are the categories being extracted?
Text: Introduction and Background
Electronic Health Records (EHRs) are organized collections of information about individual patients. They are designed such that they can be shared across different settings for providing health care services. The Institute of Medicine committee on improving the patient record has recognized the importance of using EHRs to inform decision support systems and support data-driven quality measures BIBREF0 . One of the biggest challenges in achieving this goal is the difficulty of extracting information from large quantities of EHR data stored as unstructured free text. Clinicians often make use of narratives and first-person stories to document interactions, findings and analyses in patient cases BIBREF1 . As a result, finding information from these volumes of health care records typically requires the use of NLP techniques to automate the extraction process.
There has been a long history of research in the application of NLP methods in the clinical domain BIBREF2 . Researchers have developed models for automatically detecting outbreaks of diseases such as influenza BIBREF3 , identifying adverse drug reactions BIBREF4 , BIBREF5 , BIBREF6 , and measuring the quality of colonoscopy procedures BIBREF7 , among others. Due to the complexity of clinical text, the accuracy of these techniques may vary BIBREF8 . Current tools also lack provision for end-users to inspect NLP outcomes and make corrections that might improve these results. Due to these factors, Chapman et al. BIBREF2 have identified "lack of user-centered development" as one of the barriers to NLP adoption in the clinical domain. There is a need to focus on development of NLP systems that are not only generalizable for use in different tasks but are also usable without excessive dependence on NLP developers. In this paper, we have explored the design of user-interfaces for use by end users (clinicians and clinical researchers) to support the review and annotation of clinical text using natural language processing.
We have developed an interactive web-based tool that facilitates both the review of binary variables extracted from clinical records, and the provision of feedback that can be used to improve the accuracy of NLP models. Our goal is to close the natural language processing gap by providing clinical researchers with highly-usable tools that will facilitate the process of reviewing NLP output, identifying errors in model prediction, and providing feedback that can be used to retrain or extend models to make them more effective.
Related Work
During the process of developing our interactive text analysis tool for the clinical domain, we studied relevant work from multiple research areas spanning Visualization, Interactive Machine Learning and Interface Design. We have built upon the following work in these areas in the design of our tool.
Visualization and Sensemaking
Visualization tools such as WordTree BIBREF9 and Tiara BIBREF10 help in providing a visual summary of large amount of text data. While Tiara focuses on content evolution of each topic over time, WordTree provides a keyword in context method of exploring the text. Other tools such as Jigsaw BIBREF11 help users interpret document collections by visualizing documents in multiple graph, cluster and list views. Our task in reviewing clinical documents is somewhat different, in that our goals are to understand common textual patterns and to use those patterns to improve NLP models. We have adapted elements of these views - in particular, WordTree's phrase view and Jigsaw's document view to support our goals. The purpose of these visualizations would be to provide both detailed document-level views and also dataset-level overviews.
Interactive Machine Learning
There have been many efforts to develop user-centric tools for machine learning and NLP, making it easier for end users to build models. D'Avolio et al. BIBREF12 have described a prototype that combines several existing tools such as Knowtator BIBREF13 for creating text annotations, and cTAKES BIBREF14 for deriving NLP features, within a common user interface that can be used to configure the machine learning algorithms and export their results. Our present work complements this effort, focusing instead on facilitating expert review of NLP results and provision of feedback regarding the accuracy and completeness of details extracted from NLP data.
Other efforts have taken this idea even further to build interactive machine learning systems that learn iteratively from their end-users. Sometimes referred to as “human-in-the-loop” methods, these techniques involve a learning system whose output is used by the end-user to further inform the system about the learning task. This forms a closed loop that can be used to build continuously improving models of prediction. Some examples include applications in interactive document clustering BIBREF15 , document retrieval BIBREF16 , image segmentation BIBREF17 , bug triaging BIBREF18 and even music composition BIBREF19 . These successes suggest that it may be promising to use feedback to improve machine learning models in the clinical domain.
Design Requirements
We assume that the users of our tool are domain experts who are familiar with the contents of the documents being reviewed, but not with machine learning. Our approach focuses on designing interaction methods and novel data visualizations for the user to interact with and correct the learning models. Further, while most of the focus in previous work has been towards developing usable interaction methods with the learning algorithms, more often in real world applications, we find that obtaining reliable labels for the training examples is very difficult, costly or time-consuming. In domains such as medicine, we require the help of skilled domain experts. Labeled data are important to support training automated systems; yet, large amounts of training data do not exist for new use cases or for applications that may arise in the future. It is therefore of great practical interest to develop methods for obtaining good quality labels efficiently. Such methods are even more in need for NLP applications because it is time consuming for annotators to obtain the contextual information from the text before labeling. Lastly, we need to design techniques for the users to review the output of the NLP models. They should allow the users to find errors in predictions and make changes to build revised models. This would form a closed loop that would allow the users to iteratively create more accurate models that can be useful in their analysis. In summary, the tool must (1) let domain experts review NLP output and spot prediction errors, (2) collect labels and feedback from them efficiently, and (3) use that feedback to retrain improved models in a closed loop.
Interface Design
To demonstrate our tool, we have used an example dataset of colonoscopy reports by building on work done by Harkema et al. BIBREF7 . They have described an NLP system to extract values for a set of boolean variables from these reports. We have included a subset of 14 of these variables for the demo. Each patient record in the example dataset can include multiple linked reports from endoscopy and pathology. We have considered such reports together as a single document for learning and making predictions.
Figure 1 shows a screenshot of our web-based tool. We have also uploaded a demo video of the tool at http://vimeo.com/trivedigaurav/emr-demo. In the following sections, we describe the individual components of the tool's user-interface, relating to the three requirements discussed above.
Review
An interactive machine learning cycle begins with the review step where the output from the learning model is shown to the user. Initial models can be trained on a few hand-annotated training examples. We have designed the following views in our tool to help the user inspect the prediction results.
The grid-view is a table with columns showing the 14 variables and rows representing the individual documents. Each cell in the table shows the predicted value – true or false – corresponding to the particular document-variable pair. This table is scrollable to accommodate all the documents in the dataset and extends beyond what is visible in Figure 1 (a). We also have some cells with a question mark (?), where the model is unsure about the classification. This might happen for one of two reasons: either the classification algorithm does not identify a clear answer, or the learning system does not have sufficient examples in the training data to make any predictions as yet. Subsequent feedback may tilt the classification in either direction.
If the user hovers the mouse over a particular cell, a pop-up appears below it that shows the prediction probability, or how confident the learning system is in making that prediction. For example, the probability of a particular cell being true may be 75 percent. The grid also doubles up as a way to navigate through the documents. When a user clicks on a particular cell, the corresponding document-variable pair is activated in the all other views. The document view opens up the active document on the right-half of the screen. The highlighted cell in the grid indicates the currently active document-variable pair. Whenever the user clicks on a cell, we also mark it as visited to keep track of them. Visited cells in the grid are denoted by an asterisk symbol (*).
An overview bar at the top of each column displays the true-false distribution (skew) of each variable. Exact distribution percentages are shown when the user mouses over the variable name.
We have followed a uniform color scheme throughout the tool. Everything shaded blue represents a true value while the orange shades stand for false values. The colors were selected from a colorblind-safe palette. In the grid view, the cells with higher probability have a darker background color. For example, a light blue cell indicates a low probability about a true classification, and a darker blue for a higher probability.
Below the grid, we have views showing statistics about the currently active document and the variable. We show a histogram with a distribution of the true, false and unknown values over all the documents in the grid for the activated variable. Again, to reveal the exact counts under each prediction class, a user may hover the mouse over the chart. This display is similar to the overview bar above the grid but is more detailed and changes dynamically when the user uses the search box or the WordTree view to filter the document collection.
Our NLP pipeline uses a bag of words feature-set and a support vector machine (SVM) learning model for every variable, but it can be extended for use with different kinds of models and complement other existing tools as well. It works by identifying more informative features from a document (top terms) to make predictions. Informative terms are highlighted when present in the current document in the right half, with overall distributions presented on the left-hand side of the screen. Terms are color-coded to indicate their contribution towards assigning the value of true or false against a variable, using colors from the document-variable grid. A mouse over each top term will reveal the feature weights from the learning system. Note that the current implementation consists of only unigram features but the same idea can be extended to $n$ -grams as well.
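A minimal sketch of such a per-variable pipeline is shown below, written with scikit-learn. The actual system's feature extraction, classifier settings and confidence estimates (which would additionally require probability calibration) may differ; names are ours.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

def train_variable_model(documents, labels):
    """One binary bag-of-words SVM per variable (e.g., 'biopsy'); labels are 1/0."""
    vectorizer = CountVectorizer(lowercase=True)
    X = vectorizer.fit_transform(documents)
    clf = LinearSVC().fit(X, labels)        # a linear model exposes per-term weights
    return vectorizer, clf

def top_terms(vectorizer, clf, n=10):
    """Most informative terms for the true (positive) and false (negative) classes."""
    vocab = vectorizer.get_feature_names_out()
    order = clf.coef_[0].argsort()
    return {"true": [vocab[i] for i in order[-n:][::-1]],
            "false": [vocab[i] for i in order[:n]]}
```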
On the right hand side of the tool, we show the full-text of the reports. The linked documents from a patient record, such as endoscopy and pathology reports, are listed on the top of this view with shortcuts to jump to any of them. The top terms, both true and false (as seen in the grid view), are highlighted in the document. The keyword lists from the previous view can be used to navigate through the report as well. Clicking on a keyword from the list causes the document view to scroll to and highlight the first appearance of the term in the current document. This can also be done from the top terms list for the variables. However, since the term list contains the aggregate of the terms from all the documents together, there is a chance that a particular term doesn't appear in the open document. When such a term is clicked, the keyword is animated with a brief jitter to indicate that it cannot be found. The highlighted top-terms in the document view follow the same color scheme for true and false indicators.
Clinical reports also contain boilerplate text, or portions that can be considered to be having no effect on the predictions. These include de-identification headers, footers, and report's template text. These portions are dimmed in gray to improve the readability of the reports.
We have discussed views that provide detailed document-level visualization of the health records. But, we still need a visualization that could give a quick overview of the complete dataset. The WordTree BIBREF9 visualization offers a visual search tool for unstructured text that makes it easy to explore repetitive word phrases. The main advantage of using the wordtree is that it offers a complete data-set level visualization while retaining sentence level contextual information. A wordtree for a particular keyword consists of all the sentences in the dataset having that word or phrase. If one thinks of this keyword as the root of the tree, the branches represent the phrases that precede or follow that word. All the nodes of the tree are built recursively in this fashion. The font size of a particular node is decided according to the total proportion of sentences that have it as a common starting phrase.
We have made several improvements to the original wordtree design by Wattenberg and Viegas BIBREF9 . Their design restricts the root phrases to be present at the beginning or the end of a sentence. This allows the tree to grow only in one direction. We have used a modified design of the wordtree (as proposed in BIBREF20 ) to construct a bi-directional tree that can grow in both directions. A sentence reads from left to right with the root phrase in the middle of the tree. Ends of sentences are denoted by a period (.) node. This information is also conveyed in plain-text and numbers as a tool-tip on one corner of the WordTree view on mouse over. The color gradients (described below) are dynamically updated according to the variable selected by the user and also as the model's predictions change upon retraining. The gradients provide an insight into how the machine learning model's prediction changes as different words or phrases are present or absent from the documents.
To start using the WordTree view, the user must enter a keyword or a phrase in the search bar. The tool creates a wordtree with the search query as the root after scanning all the sentences in the dataset. One can interact with the wordtree and navigate through its branches by clicking on individual nodes as shown in Figure UID12 . Doing this prunes the tree and drills down into the details by adding the clicked node to the root phrase along with the search term. Clicking on the same node again reverts the view to the previous state of the tree. The gray bar below the wordtree shows the number of documents and the percentage of the dataset represented in the tree. The WordTree view also has a full-screen mode which hides the other views in the tool when required.
We have extended the wordtree to use color-coded gradients to encode class distribution information. Each word is painted in a gradient, with the extent of the blue/orange color indicating the percentage of active documents in the grid classified as true/false, using the previously-described color palette. The grid view is also linked to the wordtree: pruning the branches in the tree filters the set of documents to display in the grid to contain only those that are represented in the tree. The document ID list on the top right of the screen and the statistics views are similarly coordinated. If a user wishes to read more than what is available in the tree, they could just click on the corresponding cell in the grid to switch back to the document view and review the complete document. In the following section we will describe how the wordtree can be used for providing annotations as well.
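As a rough illustration of how the forward-growing half of such a tree and its gradient statistics could be assembled (the deployed tool is bi-directional and interactive), consider the sketch below; all data structures and names are ours.

```python
from collections import defaultdict

def wordtree_counts(sentences, root, max_depth=4):
    """sentences: list of (tokens, predicted_label) pairs, where predicted_label is the
    model's True/False prediction for the document the sentence comes from.
    Returns phrase continuations of the root keyword mapped to [true_count, false_count];
    total counts drive node font size and the true/false ratio drives the color gradient."""
    counts = defaultdict(lambda: [0, 0])
    root = root.lower()
    for tokens, label in sentences:
        tokens = [t.lower() for t in tokens]
        for i, tok in enumerate(tokens):
            if tok != root:
                continue
            for depth in range(1, max_depth + 1):
                phrase = tuple(tokens[i:i + depth + 1])
                counts[phrase][0 if label else 1] += 1
    return counts
```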
Feedback
Feedback from the user is used by the system to improve upon the machine learning models by providing labels for documents that were previously not part of the training set, or by correcting any misclassified documents. Useful feedback to the machine learning system helps improve the prediction accuracy of the NLP models. A user can provide feedback by simply changing the prediction class of a document for a given variable. The learning system would then be able to use this as a training example to learn from its features. The marginal benefits may be greater if a user classifies a group of documents together instead of annotating them one by one. To classify a group of documents, we could search for a selected text span and label all the matching documents as belonging to a certain class. For this approach to work correctly, selected text spans must convey consistent meanings across different usage scenarios in the dataset, and feedback based on these selections should not imply any contradictory classifications.
Our prototype supports multiple mechanisms for providing feedback as well as a review display that alerts the user to any potential inconsistencies associated with them. To provide feedback for a particular document-variable pair, the user can select either true or false on the yellow control bar above the document view (Figure 1 (f)). The currently active variable from the grid is pre-selected but the control also allows the user to quickly change the variable of interest by choosing from the drop-down options or by activating a different cell in the grid.
Users can also provide more specific feedback by manually highlighting relevant text spans which could support document's classification. Like most other text annotators, the selection span automatically moves to ends of a word boundary if it is left hanging in the middle of a word. Multiple words forming a phrase can also be highlighted to be sent as a feedback.
Since the users are free to select their own text spans, there is scope for the feedback to be inconsistent or less useful to the learning system. As a result, we have designed a feedback mechanism using the wordtree to provide them with some guidance in selecting these spans. Here the root phrase takes the role of the highlight span. Phrases are added before and after the root word as the user drills down the tree and prunes it. We believe that the wordtree is useful in the feedback step, as it allows the users to give feedback on several documents together. The users are able to explore the different use cases of a phrase in all the documents in the dataset with a single glance. It also helps in identifying tighter and more generic feedback phrases with its click-to-drill-down design. If the user is able to make a choice without having to view the complete sentence, we can identify phrases that may be more important for the machine learning system. The varying font sizes of the phrases provide strong visual cues about their frequency of use in the dataset and thus encourage the users to attend to more useful phrases for training first. Further, feedback for multiple documents based on a single phrase may help avoid potential conflict scenarios where the user highlights similar keywords but selects different classes for feedback. To summarize, the wordtree not only provides an overview of the entire dataset, but the provision of interacting with it allows the users to work directly with phrases and sentences in the dataset. It helps them to browse the data easily, send feedback actions to the learning system, and see results with the help of color gradients.
All three kinds of feedback can be submitted from the yellow bar at the top of the screen, which shows available options depending on the context. For example, an option for providing such feedback appears as soon as a text span is selected in the document. The document view also provides a right-click menu as an additional affordance for the users to send feedback.
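To make the mechanics concrete, the sketch below shows one way a phrase-level feedback action could be expanded into document-level training labels while flagging contradictions; it is an illustration under our own naming, not the tool's actual implementation.

```python
def apply_phrase_feedback(documents, phrase, variable, label, training_labels):
    """documents: dict doc_id -> report text; training_labels: dict (doc_id, variable) -> label.
    Expands one phrase-level feedback action into document-level labels,
    collecting contradictions instead of silently overwriting earlier feedback."""
    conflicts = []
    phrase = phrase.lower()
    for doc_id, text in documents.items():
        if phrase not in text.lower():
            continue
        key = (doc_id, variable)
        if key in training_labels and training_labels[key] != label:
            conflicts.append(key)          # surfaced to the user in the Re-Train view
        else:
            training_labels[key] = label
    return conflicts
```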
Re-Train
Re-training is the final step of the interactive machine learning cycle. The Re-Train view tab keeps a count of the feedback items sent by the user and can be selected to view the list of proposed revisions to the model (Figure UID13 ). The list includes all three kinds of feedback. Clicking on the re-train button launches the re-training process. Once the retraining is complete, a new model is created and the system updates the predictions in the grid and the linked views. The grid view indicates all the differences between the old and the new model predictions. One can spot these changes in bold. These cells will also have a bold underline at the bottom. This allows the user to identify changes made in the model as a result of their feedback.
The Re-Train view also provides guidance for resolving potentially contradictory feedback items. For example, a user may provide a particular text span indicating that a given document-variable pair should be set to be true, even as they label it false in another feedback setting. In these cases, the system will return an error message specifying the problem, and highlight conflicting feedback items in red. These items can be revised or deleted from the Re-Train view, with red highlights disappearing when conflicts are resolved. Another conflict scenario involves the submission of suggested changes that undo the effects of earlier revisions to the model. These items are highlighted in yellow, and accompanied by an override option that will allow the newer input to take precedence over the earlier feedback. The repeated re-training steps allow the users to build the learning models over several iterations.
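The retraining loop and the change-highlighting described here can be summarized schematically as follows. Here `fit` and `predict` stand in for whatever per-variable learner is used; they, and all other names, are assumptions of this sketch rather than the tool's API.

```python
def retrain_and_diff(all_docs, train_data, feedback, fit, predict, old_predictions):
    """all_docs: dict doc_id -> text; train_data: dict variable -> (texts, labels);
    feedback: dict (doc_id, variable) -> label. Returns new predictions plus the
    grid cells that changed, so the interface can underline them after retraining."""
    for (doc_id, variable), label in feedback.items():
        texts, labels = train_data.setdefault(variable, ([], []))
        texts.append(all_docs[doc_id])
        labels.append(label)
    new_predictions, changed = {}, []
    for variable, (texts, labels) in train_data.items():
        model = fit(texts, labels)
        for doc_id, text in all_docs.items():
            pred = predict(model, text)
            new_predictions[(doc_id, variable)] = pred
            if old_predictions.get((doc_id, variable)) != pred:
                changed.append((doc_id, variable))
    return new_predictions, changed
```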
Implementation and Deployment
The system is implemented as a client-server architecture with communication over HTTP(S) using JSON. The user-interface has been built using the Angular (angularjs.org), D3 BIBREF21 and jQuery (jquery.com) JavaScript frameworks and libraries. The NLP learning system manages the model building and is deployed as a Tomcat Server Application. The tool incorporates several other open-source libraries and packages, a list of which is available along with the source code on request.
User Study
We conducted a formative user study to gain insight into usability factors of the tool that may be associated with errors or confusion, and to identify opportunities for improvement via re-design or implementation of new functionality.
Participants
We adopted a snowball sampling technique, starting with clinicians identified by our colleagues, to recruit participants for the user study. We conducted a total of 4 (+1 pilot) studies lasting between 60 and 90 minutes. Our participants worked as both clinicians and clinical researchers and had at least an MD degree. All participants were experienced with both clinical text and the colonoscopy procedures. Their positions varied from research faculty members to physician scientists. Three out of the four participants had between 5 and 10 years of experience in that position. They had limited experience with machine learning algorithms, with average self-reported proficiency being 5.0 on a scale of 1 to 10 (Individual ratings: 2, 5, 6, 7), where 1 is for "No knowledge at all", 5 – "Some idea about the algorithms", and 10 being "Can read and understand current research".
The pilot study helped us with some initial comments about the tool. This was done to identify any unnoticed bugs in our software prototype or any problems with our study protocol. Since we followed the same protocol in the pilot study as well and fixed only a couple of minor problems with the tool after it, we have also included its results with the rest of the studies.
Study Protocol
We began with a pre-study survey to gauge background information about the participants and their expectations from the tool. We gave a short 15-minute walkthrough of the interface before handing over the control to them. During the study, the participants were asked to review documents using the tool and to revise NLP predictions by providing feedback wherever required. We asked the participants to work with and build models for only one of the variables – biopsy, indicating whether or not the report discussed a sample biopsy. Actual interactions with the tool lasted between 20 and 30 minutes for the 4 studies but were longer for the pilot. The participants worked with 280 documents for providing feedback for a model built against a set of 30 hand-annotated documents. We followed the "think aloud" method to record their comments and reactions while using the tool. Sessions were conducted over web-conferencing software, which was also used to capture audio, screen content, and mouse interactions. At the end of the study, we asked users to complete the System Usability Scale BIBREF22 and to answer some questions regarding their understanding of the tool.
Results
We used the System Usability Scale, consisting of 10 questions on a 5-point Likert scale, to help get a global view of subjective assessments of usability. The average SUS score was 70.5 out of 100. Individual scores are provided in Table 2 .
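For readers unfamiliar with SUS, its standard scoring procedure (a published formula, not specific to this study) is shown below.

```python
def sus_score(responses):
    """responses: the 10 Likert answers (1-5) in questionnaire order.
    Odd-numbered items are positively worded, even-numbered items negatively worded."""
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5          # maps the 0-40 raw sum onto a 0-100 scale

# e.g. sus_score([5, 1] * 5) == 100.0 for a maximally positive respondent
```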
We classified the collected observations from the think-aloud sessions and comments from the semi-structured interviews into four categories: 1) Workflow: Comments and observations as the participants navigated through the documents for review, 2) WordTree: While selecting search queries and browsing the wordtree, 3) Feedback: While providing feedback to the learning system, and 4) Re-training: Upon seeing changes after re-training. Some of these comments also include requests for new features by the participants which are also summarized in Table 1 .
Workflow: Participants used the grid view to select documents of interest and the document view to navigate through the text. They found this part of the workflow to be tedious. Some participants requested a “Next" button that could be used to quickly move to a new document, instead of clicking on the cells in the grid. However, one of the participants also provided a contrasting view, expressing appreciation for the flexibility offered by the tool in selecting the documents for labeling. They also made use of the color shades representing the prediction probability numbers to prioritize documents for inspection. They requested a sort feature in the grid view that could arrange the documents according to these probability scores as well.
WordTree: Perceptions about the wordtree were mixed. Some concerns regarding the wordtree appear to stem from the tabbed display that makes the user choose between the document view and the wordtree. While all participants found the wordtree to be a faster way to provide feedback, they felt that providing the feedback without being able to see the full document text at the same time might be error-prone. Although we were able to provide sentence long phrases in the tree and to show links to the full text of the relevant documents in the grid view, the participants were in favor of having a quicker way to access the complete reports. We have proposed a re-design to address this concern for future work. Our proposed redesign includes a provision for the user to make the WordTree view pop-out from its tab so that it can be used with the document and the grid views simultaneously.
Participants discovered an unexpected use case for this view. In addition to giving feedback, the wordtree allowed users to verify the quality of their models. This was a consequence of the gradient colors in it, which showed how the presence of individual keywords affect the classification of documents. By looking at how the gradient colors changed for the different keywords, the user could understand how well the model performed in predicting the values depending on the phrases contained in the document.
A common problem was that the users left the wordtree's search filter on even after they were done using it. The tool filters the documents in the grid as the users navigate through the wordtree, but users would forget to clear this filter before the next round of analysis.
Feedback: Physicians indicated that they were accustomed to thinking in terms of rules suggesting a direct link between the feature and classification rather than the probabilistic associations used in our tool. As a result they were unsure at times about assigning a classification for a text span that serves as indicator in most but still not all of the cases. We address this problem in tool's design by encouraging models to be built iteratively. The user need not focus on building a completely accurate model at once but has an option to refine it for more specific cases in the future iterations. From the user study, however, we could not recommend any further design improvements that could make the users more comfortable with this workflow.
One missing feature pointed out by the users was the ability to select a phrase and say that it didn't contribute towards the classification of the documents, when it was being picked either as a true or a false feature by the learning system. Otherwise the participants found the tool's features very usable for sending feedback to the machine learning system.
Re-Training: We had suggested that participants could build as many models as they like, which led them to have doubts about the optimal frequency of retraining. Future work may use NLP metrics to automatically determine when to retrain. The participants indicated that they were pleased to see the grid show changes in predictions after their feedback. Another suggestion was to provide a built-in option to test their model against a held-out hand annotated testing set.
Overall we received very encouraging responses from the participants. Four out of five (including the pilot) expressed interest in having the tool made available for their own work right away. The remaining participant was not involved in any research that studies clinical text. During the pre-study interview, we asked participants about their ideas on such a tool before showing our prototype. One of the participants who is actively working on related colonoscopy research requested features like a web-based interface for collaborating with people at geographically separated locations, flexibility in selecting documents to annotate, and a feedback mechanism for NLP. Our prototype tool was able to satisfy his needs in all of these aspects.
Discussion and Future Work
The initial feedback from the usability study provided both preliminary validation of the usability of the tool and guidance for improving the design of the tool. While we have not identified any major hurdle that would require a comprehensive re-design of any interface component, there are several extensions to the current set of features we believe might improve usability and will be promising in future work (Table 1 ).
One of the aims of this project was to explore the feasibility of using interactive review as a means of lowering the training requirements. We hypothesize that the manual review supported by this tool will enable rapid convergence on highly accurate models even by starting with smaller training sets. Testing this out in a statistically compelling manner has been left for a future empirical evaluation study. This would involve observing efficiency measures such as overall time spent and accuracy measures like F-Measure etc. under different variations of the tool. We may control the tool's presentation capabilities, types of feedback allowed and the number of training documents as the independent variables during this study. Another promising future direction would be to evaluate the use of the tool by several users in a collaborative work setting.
Conclusion
Despite the promising results shown by repeated studies involving NLP on clinical records, the benefits of NLP are all too often inaccessible to clinicians and practitioners. Moreover, we have seen from previous studies that extracting structured insights from clinical text is hard. Although NLP techniques work well, they have been put to limited use by researchers in the field. In particular, without access to usable tools that make it easier for clinicians to review and revise NLP findings, it is difficult to apply these techniques.
We have built a candidate tool to help address these problems. The interactive components of the tool along with novel visualization techniques support the entire interactive machine learning cycle with review, feedback and retraining steps. We conducted a user-study with prospective users as study participants to validate our design rationales. We also identified opportunities for improvement that will be addressed before we move forward with an empirical evaluation of the system.
Acknowledgments
We thank our user study participants. We would also like to thank Dr. Ateev Mehrotra for providing the colonoscopy reports dataset. This research was supported by NIH grant 5R01LM010964. | Unanswerable |
8fa7011e7beaa9fb4083bf7dd75d1216f9c7b2eb | 8fa7011e7beaa9fb4083bf7dd75d1216f9c7b2eb_0 | Q: Do the authors test their annotation projection techniques on tasks other than AMR?
Text: Introduction
Abstract Meaning Representation (AMR) parsing is the process of converting natural language sentences into their corresponding AMR representations BIBREF0 . An AMR is a graph with nodes representing the concepts of the sentence and edges representing the semantic relations between them. Most available AMR datasets large enough to train statistical models consist of pairs of English sentences and AMR graphs.
The cross-lingual properties of AMR across languages have been the subject of preliminary discussions. The AMR guidelines state that AMR is not an interlingua BIBREF0 and bojar2014comparing categorizes different kinds of divergences in the annotation between English AMRs and Czech AMRs. xue2014not show that structurally aligning English AMRs with Czech and Chinese AMRs is not always possible but that refined annotation guidelines suffice to resolve some of these cases. We extend this line of research by exploring whether divergences among languages can be overcome, i.e., we investigate whether it is possible to maintain the AMR annotated for English as a semantic representation for sentences written in other languages, as in Figure 1 .
We implement AMR parsers for Italian, Spanish, German and Chinese using annotation projection, where existing annotations are projected from a source language (English) to a target language through a parallel corpus BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . By evaluating the parsers and manually analyzing their output, we show that the parsers are able to recover the AMR structures even when there exist structural differences between the languages, i.e., although AMR is not an interlingua it can act as one. This method also provides a quick way to prototype multilingual AMR parsers, assuming that Part-of-speech (POS) taggers, Named Entity Recognition (NER) taggers and dependency parsers are available for the target languages. We also propose an alternative approach, where Machine Translation (MT) is used to translate the input sentences into English so that an available English AMR parser can be employed. This method is an even quicker solution which only requires translation models between the target languages and English.
Due to the lack of gold standard in the target languages, we exploit the English data to evaluate the parsers for the target languages. (Henceforth, we will use the term target parser to indicate a parser for a target language.) We achieve this by first learning the target parser from the gold standard English parser, and then inverting this process to learn a new English parser from the target parser. We then evaluate the resulting English parser against the gold standard. We call this “full-cycle” evaluation.
Similarly to evangcross, we also directly evaluate the target parser on “silver” data, obtained by parsing the English side of a parallel corpus.
In order to assess the reliability of these evaluation methods, we collected gold standard datasets for Italian, Spanish, German and Chinese by acquiring professional translations of the AMR gold standard data to these languages. We hypothesize that the full-cycle score can be used as a more reliable proxy than the silver score for evaluating the target parser. We provide evidence to this claim by comparing the three evaluation procedures (silver, full-cycle, and gold) across languages and parsers.
Our main contributions are: (1) we introduce the task of cross-lingual AMR parsing and propose two solutions, based on annotation projection and on machine translation; (2) we propose a full-cycle procedure for evaluating target-language parsers without gold annotations in the target language; (3) we create gold AMR test sets for Italian, Spanish, German and Chinese by collecting professional translations of the English test sentences; and (4) we provide evidence that, despite translation divergences, the AMR annotated for English can often be recovered from sentences in other languages.
Cross-lingual AMR parsing
AMR is a semantic representation heavily biased towards English, where labels for nodes and edges are either English words or Propbank frames BIBREF5 . The goal of AMR is to abstract away from the syntactic realization of the original sentences while maintaining their underlying meaning. As a consequence, different phrasings of one sentence are expected to provide identical AMR representations. This canonicalization does not always hold across languages: two sentences that express the same meaning in two different languages are not guaranteed to produce identical AMR structures BIBREF6 , BIBREF7 . However, xue2014not show that in many cases the unlabeled AMRs are in fact shared across languages. We are encouraged by this finding and argue that it should be possible to develop algorithms that account for some of these differences when they arise. We therefore introduce a new problem, which we call cross-lingual AMR parsing: given a sentence in any language, the goal is to recover the AMR graph that was originally devised for its English translation. This task is harder than traditional AMR parsing as it requires recovering English labels as well as dealing with structural differences between languages, usually referred to as translation divergence. We propose two initial solutions to this problem: by annotation projection and by machine translation.
Method 1: Annotation Projection
AMR is not grounded in the input sentence, therefore there is no need to change the AMR annotation when projecting to another language. We think of English labels for the graph nodes as ones from an independent language, which incidentally looks similar to English. However, in order to train state-of-the-art AMR parsers, we also need to project the alignments between AMR nodes and words in the sentence (henceforth called AMR alignments). We use word alignments, similarly to other annotation projection work, to project the AMR alignments to the target languages.
Our approach depends on an underlying assumption that we make: if a source word is word-aligned to a target word and it is AMR aligned with an AMR node, then the target word is also aligned to that AMR node. More formally, let $S = s_1 \dots s_{\vert s \vert }$ be the source language sentence and $T = t_1 \dots t_{\vert t \vert }$ be the target language sentence; $A_s(\cdot )$ be the AMR alignment mapping word tokens in $S$ to the set of AMR nodes that are triggered by it; $A_t(\cdot )$ be the same function for $T$ ; $v$ be a node in the AMR graph; and finally, $W(\cdot )$ be an alignment that maps a word in $S$ to a subset of words in $T$ . Then, the AMR projection assumption is: $$t_j \in W(s_i) \,\wedge \, v \in A_s(s_i) \;\Rightarrow \; v \in A_t(t_j).$$
In the example of Figure 1 , Questa is word-aligned with This and therefore AMR-aligned with the node this, and the same logic applies to the other aligned words. The words is, the and of do not generate any AMR nodes, so we ignore their word alignments. We apply this method to project existing AMR annotations to other languages, which are then used to train the target parsers.
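A minimal sketch of this projection step, directly implementing the assumption above, is shown below. The data structures and names are ours, and the actual pipeline additionally has to cope with tokenization mismatches and alignment noise.

```python
def project_amr_alignments(word_align, src_amr_align, tgt_len):
    """word_align: dict source index i -> set of target indices W(s_i) (e.g. from fast_align);
    src_amr_align: dict source index i -> set of AMR node ids A_s(s_i) (e.g. from JAMR).
    Returns A_t, the projected AMR alignment for the target sentence."""
    tgt_amr_align = {j: set() for j in range(tgt_len)}
    for i, nodes in src_amr_align.items():
        if not nodes:
            continue                          # words like 'is' or 'the' trigger no AMR node
        for j in word_align.get(i, ()):       # every target word aligned to s_i
            tgt_amr_align[j] |= set(nodes)
    return tgt_amr_align
```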
Method 2: Machine Translation
We invoke an MT system to translate the sentence into English so that we can use an available English parser to obtain its AMR graph. Naturally, the quality of the output graph depends on the quality of the translations. If the automatic translation is close to the reference translation, then the predicted AMR graph will be close to the reference AMR graph. It is therefore evident that this method is not informative in terms of the cross-lingual properties of AMR. However, its simplicity makes it a compelling engineering solution for parsing other languages.
Evaluation
We now turn to the problem of evaluation. Let us assume that we trained a parser for a target language, for example using the annotation projection method discussed in Section "Related Work" . In line with rapid development of new parsers, we assume that the only gold AMR dataset available is the one released for English.
We can generate a silver test set by running an automatic (English) AMR parser on the English side of a parallel corpus and use the output AMRs as references. However, the silver test set is affected by mistakes made by the English AMR parser, therefore it may not be reliable.
In order to perform the evaluation on a gold test set, we propose full-cycle evaluation: after learning the target parser from the English parser, we invert this process to learn a new English parser from the target parser, in the same way that we learned the target parser from the English parser. The resulting English parser is then evaluated against the (English) AMR gold standard. We hypothesize that the score of the new English parser can be used as a proxy to the score of the target parser.
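The full-cycle procedure can be written down schematically as follows. Every callable here (`project`, `train`, `smatch`, the parsers) is a placeholder for the corresponding component in the pipeline, so this is a sketch of the evaluation logic rather than an end-to-end implementation.

```python
def full_cycle_score(en_parser, en_side, tgt_side, gold_test, project, train, smatch):
    """gold_test: list of (English sentence, gold AMR) pairs from the released test set."""
    # English -> target: silver target-language training data and a target parser.
    silver_tgt = project(en_parser.parse(en_side), src=en_side, tgt=tgt_side)
    tgt_parser = train(silver_tgt)
    # Target -> English: invert the process to obtain a new English parser.
    silver_en = project(tgt_parser.parse(tgt_side), src=tgt_side, tgt=en_side)
    new_en_parser = train(silver_en)
    # Score the new English parser on the English gold standard as a proxy.
    sentences, references = zip(*gold_test)
    return smatch([new_en_parser.parse(s) for s in sentences], references)
```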
To show whether the evaluation methods proposed can be used reliably, we also generated gold test AMR datasets for four target languages (Italian, Spanish, German and Chinese). In order to do so, we collected professional translations for the English sentences in the AMR test set. We were then able to create pairs of human-produced sentences with human-produced AMR graphs.
A diagram summarizing the different evaluation stages is shown in Figure 2 . In the case of MT-based systems, the full-cycle corresponds to first translating from English to the target language and then back to English (back-translation), and only then parsing the sentences with the English AMR parser. At the end of this process, a noisy version of the original sentence will be returned and its parsed graph will be a noisy version of the graph parsed from the original sentence.
Experiments
We run experiments on four languages: Italian, Spanish, German and Chinese. We use Europarl BIBREF8 as the parallel corpus for Italian, Spanish and German, containing around 1.9M sentences for each language pair. For Chinese, we use the first 2M sentences from the United Nations Parallel Corpus BIBREF9 . For each target language we extract two parallel datasets of 20,000/2,000/2,000 (train/dev/test) sentences for the two steps of the annotation projection (English $\rightarrow $ target and target $\rightarrow $ English). These are used to train the AMR parsers. The projection approach also requires training the word alignments, for which we use all the remaining sentences from the parallel corpora (Europarl for Spanish/German/Italian and UN Parallel Corpus for Chinese). These are also the sentences we use to train the MT models. The gold AMR dataset is LDC2015E86, containing 16,833 training sentences, 1,368 development sentences, and 1,371 testing sentences.
Word alignments were generated using fast_align BIBREF10 , while AMR alignments were generated with JAMR BIBREF11 . AMREager BIBREF12 was chosen as the pre-existing English AMR parser. AMREager is an open-source AMR parser that needs only minor modifications for re-use with other languages. Our multilingual adaptation of AMREager is available at http://www.github.com/mdtux89/amr-eager-multilingual. It requires tokenization, POS tagging, NER tagging and dependency parsing, which for English, German and Chinese are provided by CoreNLP BIBREF13 . We use Freeling BIBREF14 for Spanish, as CoreNLP does not provide dependency parsing for this language. Italian is not supported in CoreNLP: we use Tint BIBREF15 , a CoreNLP-compatible NLP pipeline for Italian.
In order to experiment with the machine translation approach of Section "Method 2: Machine Translation" , we used translations from Google Translate. As Google Translate has access to a much larger training corpus, we also trained baseline MT models using Moses BIBREF16 and Nematus BIBREF17 , with the same training data we use for the projection method and default hyper-parameters.
Smatch BIBREF18 is used to evaluate AMR parsers. It looks for the best alignment between the predicted AMR and the reference AMR and then computes the precision, recall and $F_1$ of their edges. The original English parser achieves 65% Smatch score on the test split of LDC2015E86. Full-cycle and gold evaluations use the same dataset, while silver evaluation is performed on the split of the parallel corpora we reserved for testing. Results are shown in Table 1 . The Google Translate system outperforms all other systems, but is not directly comparable to them, as it has the unfair advantage of being trained on a much larger dataset. Due to the noisy JAMR alignments and the silver training data involved in the annotation projection approach, the MT-based systems generally give better parsing results. The BLEU scores of all translation systems are shown in Table 2 .
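Once Smatch has fixed the best alignment between the two graphs, the reported scores reduce to standard precision, recall and F1 over the matched edges; the helper below illustrates only this final computation and is not the Smatch implementation itself.

    # Precision/recall/F1 over matched AMR edges, given counts produced by the best
    # alignment found by Smatch (illustrative simplification).
    def smatch_prf1(num_matching, num_predicted, num_gold):
        p = num_matching / num_predicted if num_predicted else 0.0
        r = num_matching / num_gold if num_gold else 0.0
        f1 = 2 * p * r / (p + r) if (p + r) else 0.0
        return p, r, f1

    print(smatch_prf1(13, 20, 20))  # (0.65, 0.65, 0.65), i.e. a 65% Smatch score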
There are several sources of noise in the annotation projection method, which affect the parsing results: 1) the parsers are trained on silver data obtained by an automatic parser for English; 2) the projection uses noisy word alignments; 3) the AMR alignments on the source side are also noisy; 4) translation divergences exist between the languages, making it sometimes difficult to project the annotation without loss of information.
Qualitative Analysis
Figure 3 shows examples of output parses for all languages, including the AMR alignments produced as a by-product of the parsing process, which we use to discuss the mistakes made by the parsers.
In the Italian example, the only evident error is that Infine (Lastly) should be ignored. In the Spanish example, the word medida (measure) is wrongly ignored: it should be used to generate a child of the node impact-01. Some of the :ARG roles are also not correct. In the German example, meines (my) should reflect the fact that the speaker is talking about his own country. Finally, in the Chinese example, there are several mistakes including yet another concept identification mistake: intend-01 is erroneously triggered.
Most mistakes involve concept identification. In particular, relevant words are often erroneously ignored by the parser. This is directly related to the problem of noisy word alignments in annotation projection: the parser learns what words are likely to trigger a node (or a set of nodes) in the AMR by looking at their AMR alignments (which are induced by the word alignments). If an important word consistently remains unaligned, the parser will erroneously learn to discard it. More accurate alignments are therefore crucial in order to achieve better parsing results. We computed the percentage of words in the training data that are learned to be non-content-bearing in each parser and we found that the Chinese parser, which is our least accurate parser, suffers most from this, with 33% non-content-bearing words. On the other hand, in the German parser, which is the highest scoring, only 26% of the words are non-content-bearing, which is the lowest percentage amongst all parsers.
Translational Divergence
In order to investigate the hypothesis that AMR can be shared across these languages, we now look at translational divergence and discuss how it affects parsing, following the classification used in previous work BIBREF19 , BIBREF20 , which identifies classes of divergences for several languages. sulem2015conceptual also follow the same categorization for French.
Figure 4 shows six sentences displaying these divergences. The aim of this analysis is to assess how the parsers deal with the different kind of translational divergences, regardless of the overall quality of the output.
This divergence happens when two languages use different POS tags to express the same meaning. For example, the English sentence I am jealous of you is translated into Spanish as Tengo envidia de ti (I have jealousy of you). The English adjective jealous is translated into the Spanish noun envidia. In Figure 4 a we note that the categorical divergence does not create problems since the parsers correctly recognized that envidia (jealousy/envy) should be used as the predicate, regardless of its POS.
This divergence happens when verbs expressed in a language with a single word can be expressed with more words in another language. Two subtypes are distinguished: manner and light verb. Manner refers to a manner verb that is mapped to a motion verb plus a manner-bearing word. For example, We will answer is translated into the Italian sentence Noi daremo una risposta (We will give an answer), where to answer is translated as daremo una risposta (will give an answer). Figure 4 b shows that the Italian parser generates a sensible output for this sentence by creating a single node labeled answer-01 for the expression dare una risposta.
In a light verb conflational divergence, a verb is mapped to a light verb plus an additional meaning unit, such as when I fear is translated as Io ho paura (I have fear) in Italian: to fear is mapped to the light verb ho (have) plus the noun paura (fear). Figure 4 e shows that also this divergence is dealt properly by the Italian parser: ho paura correctly triggers the root fear-01.
This divergence happens when verb arguments result in different syntactic configurations, for example, due to an additional PP attachment. When translating He entered the house with Lui è entrato nella casa (He entered in the house), the Italian translation has an additional in preposition. Also this parsed graph, in Figure 4 c, is structurally correct. The missing node he is due to pronoun-dropping, which is frequent in Italian.
This divergence occurs when the direction of the dependency between two words is inverted. For example, I like eating, where like is head of eating, becomes Ich esse gern (I eat likingly) in German, where the dependency is inverted. Unlike all other examples, in this case, the German parser does not cope well with this divergence: it is unable to recognize like-01 as the main concept in the sentence, as shown in Figure 4 d.
Finally, the parse of Figure 4 f has to deal with a thematic divergence, which happens when the semantic roles of a predicate are inverted. In the sentence I like grapes, translated to Spanish as Me gustan uvas, I is the subject in English while Me is the object in Spanish. Even though we note an erroneous reentrant edge between grape and I, the thematic divergence does not create problems: the parser correctly recognizes the :ARG0 relationship between like-01 and I and the :ARG1 relationship between like-01 and grape. In this case, the edge labels are important, as this type of divergence is concerned with the semantic roles.
Related Work
AMR parsing for languages other than English has made only a few steps forward. In previous work BIBREF22 , BIBREF7 , BIBREF6 , nodes of the target graph were labeled with either English words or with words in the target language. We instead use the AMR annotation used for English for the target language as well, without translating any word. To the best of our knowledge, the only previous work that attempts to automatically parse AMR graphs for non-English sentences is by vanderwende2015amr. Sentences in several languages (French, German, Spanish and Japanese) are parsed into a logical representation, which is then converted to AMR using a small set of rules. A comparison with this work is difficult, as the authors do not report results for the parsers (due to the lack of an annotated corpus) or release their code.
Besides AMR, other semantic parsing frameworks for non-English languages have been investigated BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 . evangcross is the most closely related to our work as it uses a projection mechanism similar to ours for CCG. A crucial difference is that, in order to project CCG parse trees to the target languages, they only make use of literal translation. Previous work has also focused on assessing the stability across languages of semantic frameworks such as AMR BIBREF7 , BIBREF6 , UCCA BIBREF27 and Propbank BIBREF28 .
Cross-lingual techniques can cope with the lack of labeled data on languages when this data is available in at least one language, usually English. The annotation projection method, which we follow in this work, is one way to address this problem. It was introduced for POS tagging, base noun phrase bracketing, NER tagging, and inflectional morphological analysis BIBREF29 but it has also been used for dependency parsing BIBREF30 , role labeling BIBREF31 , BIBREF32 and semantic parsing BIBREF26 . Another common thread of cross-lingual work is model transfer, where parameters are shared across languages BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 , BIBREF37 .
Conclusions
We introduced the problem of parsing AMR structures, annotated for English, from sentences written in other languages as a way to test the cross-lingual properties of AMR. We provided evidence that AMR can be indeed shared across the languages tested and that it is possible to overcome translational divergences. We further proposed a novel way to evaluate the target parsers that does not require manual annotations of the target language. The full-cycle procedure is not limited to AMR parsing and could be used for other cross-lingual problems in NLP. The results of the projection-based AMR parsers indicate that there is a vast room for improvements, especially in terms of generating better alignments. We encourage further work in this direction by releasing professional translations of the AMR test set into four languages.
Acknowledgments
The authors would like to thank the three anonymous reviewers and Sameer Bansal, Gozde Gul Sahin, Sorcha Gilroy, Ida Szubert, Esma Balkir, Nikos Papasarantopoulos, Joana Ribeiro, Shashi Narayan, Toms Bergmanis, Clara Vania, Yang Liu and Adam Lopez for their helpful comments. This research was supported by a grant from Bloomberg and by the H2020 project SUMMA, under grant agreement 688139. | No |
e0b7acf4292b71725b140f089c6850aebf2828d2 | e0b7acf4292b71725b140f089c6850aebf2828d2_0 | Q: How is annotation projection done when languages have different word order?
Text: Introduction
Abstract Meaning Representation (AMR) parsing is the process of converting natural language sentences into their corresponding AMR representations BIBREF0 . An AMR is a graph with nodes representing the concepts of the sentence and edges representing the semantic relations between them. Most available AMR datasets large enough to train statistical models consist of pairs of English sentences and AMR graphs.
The cross-lingual properties of AMR have been the subject of preliminary discussions. The AMR guidelines state that AMR is not an interlingua BIBREF0 and bojar2014comparing categorizes different kinds of divergences in the annotation between English AMRs and Czech AMRs. xue2014not show that structurally aligning English AMRs with Czech and Chinese AMRs is not always possible but that refined annotation guidelines suffice to resolve some of these cases. We extend this line of research by exploring whether divergences among languages can be overcome, i.e., we investigate whether it is possible to maintain the AMR annotated for English as a semantic representation for sentences written in other languages, as in Figure 1 .
We implement AMR parsers for Italian, Spanish, German and Chinese using annotation projection, where existing annotations are projected from a source language (English) to a target language through a parallel corpus BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . By evaluating the parsers and manually analyzing their output, we show that the parsers are able to recover the AMR structures even when there exist structural differences between the languages, i.e., although AMR is not an interlingua it can act as one. This method also provides a quick way to prototype multilingual AMR parsers, assuming that Part-of-speech (POS) taggers, Named Entity Recognition (NER) taggers and dependency parsers are available for the target languages. We also propose an alternative approach, where Machine Translation (MT) is used to translate the input sentences into English so that an available English AMR parser can be employed. This method is an even quicker solution which only requires translation models between the target languages and English.
Due to the lack of gold standard in the target languages, we exploit the English data to evaluate the parsers for the target languages. (Henceforth, we will use the term target parser to indicate a parser for a target language.) We achieve this by first learning the target parser from the gold standard English parser, and then inverting this process to learn a new English parser from the target parser. We then evaluate the resulting English parser against the gold standard. We call this “full-cycle” evaluation.
Similarly to evangcross, we also directly evaluate the target parser on “silver” data, obtained by parsing the English side of a parallel corpus.
In order to assess the reliability of these evaluation methods, we collected gold standard datasets for Italian, Spanish, German and Chinese by acquiring professional translations of the AMR gold standard data to these languages. We hypothesize that the full-cycle score can be used as a more reliable proxy than the silver score for evaluating the target parser. We provide evidence to this claim by comparing the three evaluation procedures (silver, full-cycle, and gold) across languages and parsers.
Our main contributions are:
Cross-lingual AMR parsing
AMR is a semantic representation heavily biased towards English, where labels for nodes and edges are either English words or Propbank frames BIBREF5 . The goal of AMR is to abstract away from the syntactic realization of the original sentences while maintaining their underlying meaning. As a consequence, different phrasings of one sentence are expected to provide identical AMR representations. This canonicalization does not always hold across languages: two sentences that express the same meaning in two different languages are not guaranteed to produce identical AMR structures BIBREF6 , BIBREF7 . However, xue2014not show that in many cases the unlabeled AMRs are in fact shared across languages. We are encouraged by this finding and argue that it should be possible to develop algorithms that account for some of these differences when they arise. We therefore introduce a new problem, which we call cross-lingual AMR parsing: given a sentence in any language, the goal is to recover the AMR graph that was originally devised for its English translation. This task is harder than traditional AMR parsing as it requires recovering English labels as well as dealing with structural differences between languages, usually referred to as translation divergence. We propose two initial solutions to this problem: by annotation projection and by machine translation.
Method 1: Annotation Projection
AMR is not grounded in the input sentence, therefore there is no need to change the AMR annotation when projecting to another language. We think of English labels for the graph nodes as ones from an independent language, which incidentally looks similar to English. However, in order to train state-of-the-art AMR parsers, we also need to project the alignments between AMR nodes and words in the sentence (henceforth called AMR alignments). We use word alignments, similarly to other annotation projection work, to project the AMR alignments to the target languages.
Our approach depends on an underlying assumption that we make: if a source word is word-aligned to a target word and it is AMR-aligned with an AMR node, then the target word is also aligned to that AMR node. More formally, let $S = s_1 \dots s_{\vert s \vert }$ be the source language sentence and $T = t_1 \dots t_{\vert t \vert }$ be the target language sentence; $A_s(\cdot )$ be the AMR alignment mapping word tokens in $S$ to the set of AMR nodes that are triggered by it; $A_t(\cdot )$ be the same function for $T$ ; $v$ be a node in the AMR graph; and finally, $W(\cdot )$ be an alignment that maps a word in $S$ to a subset of words in $T$ . Then, the AMR projection assumption is: for every source word $s_i$ , every target word $t_j \in W(s_i)$ and every AMR node $v$ , if $v \in A_s(s_i)$ then $v \in A_t(t_j)$ .
In the example of Figure 1 , Questa is word-aligned with This and therefore AMR-aligned with the node this, and the same logic applies to the other aligned words. The words is, the and of do not generate any AMR nodes, so we ignore their word alignments. We apply this method to project existing AMR annotations to other languages, which are then used to train the target parsers.
Method 2: Machine Translation
We invoke an MT system to translate the sentence into English so that we can use an available English parser to obtain its AMR graph. Naturally, the quality of the output graph depends on the quality of the translations. If the automatic translation is close to the reference translation, then the predicted AMR graph will be close to the reference AMR graph. It is therefore evident that this method is not informative in terms of the cross-lingual properties of AMR. However, its simplicity makes it a compelling engineering solution for parsing other languages.
Evaluation
We now turn to the problem of evaluation. Let us assume that we trained a parser for a target language, for example using the annotation projection method discussed in Section "Method 1: Annotation Projection" . In line with the rapid development of new parsers, we assume that the only gold AMR dataset available is the one released for English.
We can generate a silver test set by running an automatic (English) AMR parser on the English side of a parallel corpus and use the output AMRs as references. However, the silver test set is affected by mistakes made by the English AMR parser, therefore it may not be reliable.
In order to perform the evaluation on a gold test set, we propose full-cycle evaluation: after learning the target parser from the English parser, we invert this process to learn a new English parser from the target parser, in the same way that we learned the target parser from the English parser. The resulting English parser is then evaluated against the (English) AMR gold standard. We hypothesize that the score of the new English parser can be used as a proxy to the score of the target parser.
To show whether the evaluation methods proposed can be used reliably, we also generated gold test AMR datasets for four target languages (Italian, Spanish, German and Chinese). In order to do so, we collected professional translations for the English sentences in the AMR test set. We were then able to create pairs of human-produced sentences with human-produced AMR graphs.
A diagram summarizing the different evaluation stages is shown in Figure 2 . In the case of MT-based systems, the full-cycle corresponds to first translating from English to the target language and then back to English (back-translation), and only then parsing the sentences with the English AMR parser. At the end of this process, a noisy version of the original sentence will be returned and its parsed graph will be a noisy version of the graph parsed from the original sentence.
Experiments
We run experiments on four languages: Italian, Spanish, German and Chinese. We use Europarl BIBREF8 as the parallel corpus for Italian, Spanish and German, containing around 1.9M sentences for each language pair. For Chinese, we use the first 2M sentences from the United Nations Parallel Corpus BIBREF9 . For each target language we extract two parallel datasets of 20,000/2,000/2,000 (train/dev/test) sentences for the two steps of the annotation projection (English $\rightarrow $ target and target $\rightarrow $ English). These are used to train the AMR parsers. The projection approach also requires training the word alignments, for which we use all the remaining sentences from the parallel corpora (Europarl for Spanish/German/Italian and UN Parallel Corpus for Chinese). These are also the sentences we use to train the MT models. The gold AMR dataset is LDC2015E86, containing 16,833 training sentences, 1,368 development sentences, and 1,371 testing sentences.
Word alignments were generated using fast_align BIBREF10 , while AMR alignments were generated with JAMR BIBREF11 . AMREager BIBREF12 was chosen as the pre-existing English AMR parser. AMREager is an open-source AMR parser that needs only minor modifications for re-use with other languages. Our multilingual adaptation of AMREager is available at http://www.github.com/mdtux89/amr-eager-multilingual. It requires tokenization, POS tagging, NER tagging and dependency parsing, which for English, German and Chinese are provided by CoreNLP BIBREF13 . We use Freeling BIBREF14 for Spanish, as CoreNLP does not provide dependency parsing for this language. Italian is not supported in CoreNLP: we use Tint BIBREF15 , a CoreNLP-compatible NLP pipeline for Italian.
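For reference, fast_align expects one sentence pair per line with source and target separated by ' ||| '; a typical forward-alignment run, wrapped in Python here for consistency with the other sketches, looks roughly as follows (file names are placeholders).

    # Illustrative fast_align invocation (file names are placeholders; the flags follow
    # the tool's documented usage: -d favours the diagonal, -o optimises the tension
    # parameter, -v uses a Dirichlet prior on translation distributions).
    import subprocess

    with open("forward.align", "w") as fout:
        subprocess.run(["fast_align", "-i", "europarl.en-it", "-d", "-o", "-v"],
                       stdout=fout, check=True)
    # A reverse run with the additional -r flag and a symmetrisation step would normally follow.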
In order to experiment with the machine translation approach of Section "Method 2: Machine Translation" , we used translations from Google Translate. As Google Translate has access to a much larger training corpus, we also trained baseline MT models using Moses BIBREF16 and Nematus BIBREF17 , with the same training data we use for the projection method and default hyper-parameters.
Smatch BIBREF18 is used to evaluate AMR parsers. It looks for the best alignment between the predicted AMR and the reference AMR and then computes the precision, recall and $F_1$ of their edges. The original English parser achieves 65% Smatch score on the test split of LDC2015E86. Full-cycle and gold evaluations use the same dataset, while silver evaluation is performed on the split of the parallel corpora we reserved for testing. Results are shown in Table 1 . The Google Translate system outperforms all other systems, but is not directly comparable to them, as it has the unfair advantage of being trained on a much larger dataset. Due to the noisy JAMR alignments and the silver training data involved in the annotation projection approach, the MT-based systems generally give better parsing results. The BLEU scores of all translation systems are shown in Table 2 .
There are several sources of noise in the annotation projection method, which affect the parsing results: 1) the parsers are trained on silver data obtained by an automatic parser for English; 2) the projection uses noisy word alignments; 3) the AMR alignments on the source side are also noisy; 4) translation divergences exist between the languages, making it sometimes difficult to project the annotation without loss of information.
Qualitative Analysis
Figure 3 shows examples of output parses for all languages, including the AMR alignments produced as a by-product of the parsing process, which we use to discuss the mistakes made by the parsers.
In the Italian example, the only evident error is that Infine (Lastly) should be ignored. In the Spanish example, the word medida (measure) is wrongly ignored: it should be used to generate a child of the node impact-01. Some of the :ARG roles are also not correct. In the German example, meines (my) should reflect the fact that the speaker is talking about his own country. Finally, in the Chinese example, there are several mistakes including yet another concept identification mistake: intend-01 is erroneously triggered.
Most mistakes involve concept identification. In particular, relevant words are often erroneously ignored by the parser. This is directly related to the problem of noisy word alignments in annotation projection: the parser learns what words are likely to trigger a node (or a set of nodes) in the AMR by looking at their AMR alignments (which are induced by the word alignments). If an important word consistently remains unaligned, the parser will erroneously learn to discard it. More accurate alignments are therefore crucial in order to achieve better parsing results. We computed the percentage of words in the training data that are learned to be non-content-bearing in each parser and we found that the Chinese parser, which is our least accurate parser, suffers most from this, with 33% non-content-bearing words. On the other hand, in the German parser, which is the highest scoring, only 26% of the words are non-content-bearing, which is the lowest percentage amongst all parsers.
Translational Divergence
In order to investigate the hypothesis that AMR can be shared across these languages, we now look at translational divergence and discuss how it affects parsing, following the classification used in previous work BIBREF19 , BIBREF20 , which identifies classes of divergences for several languages. sulem2015conceptual also follow the same categorization for French.
Figure 4 shows six sentences displaying these divergences. The aim of this analysis is to assess how the parsers deal with the different kind of translational divergences, regardless of the overall quality of the output.
This divergence happens when two languages use different POS tags to express the same meaning. For example, the English sentence I am jealous of you is translated into Spanish as Tengo envidia de ti (I have jealousy of you). The English adjective jealous is translated into the Spanish noun envidia. In Figure 4 a we note that the categorical divergence does not create problems since the parsers correctly recognized that envidia (jealousy/envy) should be used as the predicate, regardless of its POS.
This divergence happens when verbs expressed in a language with a single word can be expressed with more words in another language. Two subtypes are distinguished: manner and light verb. Manner refers to a manner verb that is mapped to a motion verb plus a manner-bearing word. For example, We will answer is translated into the Italian sentence Noi daremo una risposta (We will give an answer), where to answer is translated as daremo una risposta (will give an answer). Figure 4 b shows that the Italian parser generates a sensible output for this sentence by creating a single node labeled answer-01 for the expression dare una risposta.
In a light verb conflational divergence, a verb is mapped to a light verb plus an additional meaning unit, such as when I fear is translated as Io ho paura (I have fear) in Italian: to fear is mapped to the light verb ho (have) plus the noun paura (fear). Figure 4 e shows that also this divergence is dealt properly by the Italian parser: ho paura correctly triggers the root fear-01.
This divergence happens when verb arguments result in different syntactic configurations, for example, due to an additional PP attachment. When translating He entered the house with Lui è entrato nella casa (He entered in the house), the Italian translation has an additional in preposition. Also this parsed graph, in Figure 4 c, is structurally correct. The missing node he is due to pronoun-dropping, which is frequent in Italian.
This divergence occurs when the direction of the dependency between two words is inverted. For example, I like eating, where like is head of eating, becomes Ich esse gern (I eat likingly) in German, where the dependency is inverted. Unlike all other examples, in this case, the German parser does not cope well with this divergence: it is unable to recognize like-01 as the main concept in the sentence, as shown in Figure 4 d.
Finally, the parse of Figure 4 f has to deal with a thematic divergence, which happens when the semantic roles of a predicate are inverted. In the sentence I like grapes, translated to Spanish as Me gustan uvas, I is the subject in English while Me is the object in Spanish. Even though we note an erroneous reentrant edge between grape and I, the thematic divergence does not create problems: the parser correctly recognizes the :ARG0 relationship between like-01 and I and the :ARG1 relationship between like-01 and grape. In this case, the edge labels are important, as this type of divergence is concerned with the semantic roles.
Related Work
AMR parsing for languages other than English has made only a few steps forward. In previous work BIBREF22 , BIBREF7 , BIBREF6 , nodes of the target graph were labeled with either English words or with words in the target language. We instead use the AMR annotation used for English for the target language as well, without translating any word. To the best of our knowledge, the only previous work that attempts to automatically parse AMR graphs for non-English sentences is by vanderwende2015amr. Sentences in several languages (French, German, Spanish and Japanese) are parsed into a logical representation, which is then converted to AMR using a small set of rules. A comparison with this work is difficult, as the authors do not report results for the parsers (due to the lack of an annotated corpus) or release their code.
Besides AMR, other semantic parsing frameworks for non-English languages have been investigated BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 . evangcross is the most closely related to our work as it uses a projection mechanism similar to ours for CCG. A crucial difference is that, in order to project CCG parse trees to the target languages, they only make use of literal translation. Previous work has also focused on assessing the stability across languages of semantic frameworks such as AMR BIBREF7 , BIBREF6 , UCCA BIBREF27 and Propbank BIBREF28 .
Cross-lingual techniques can cope with the lack of labeled data on languages when this data is available in at least one language, usually English. The annotation projection method, which we follow in this work, is one way to address this problem. It was introduced for POS tagging, base noun phrase bracketing, NER tagging, and inflectional morphological analysis BIBREF29 but it has also been used for dependency parsing BIBREF30 , role labeling BIBREF31 , BIBREF32 and semantic parsing BIBREF26 . Another common thread of cross-lingual work is model transfer, where parameters are shared across languages BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 , BIBREF37 .
Conclusions
We introduced the problem of parsing AMR structures, annotated for English, from sentences written in other languages as a way to test the cross-lingual properties of AMR. We provided evidence that AMR can be indeed shared across the languages tested and that it is possible to overcome translational divergences. We further proposed a novel way to evaluate the target parsers that does not require manual annotations of the target language. The full-cycle procedure is not limited to AMR parsing and could be used for other cross-lingual problems in NLP. The results of the projection-based AMR parsers indicate that there is a vast room for improvements, especially in terms of generating better alignments. We encourage further work in this direction by releasing professional translations of the AMR test set into four languages.
Acknowledgments
The authors would like to thank the three anonymous reviewers and Sameer Bansal, Gozde Gul Sahin, Sorcha Gilroy, Ida Szubert, Esma Balkir, Nikos Papasarantopoulos, Joana Ribeiro, Shashi Narayan, Toms Bergmanis, Clara Vania, Yang Liu and Adam Lopez for their helpful comments. This research was supported by a grant from Bloomberg and by the H2020 project SUMMA, under grant agreement 688139. | Word alignments are generated for parallel text, and aligned words are assumed to also share AMR node alignments. |
b6ffa18d49e188c454188669987b0a4807ca3018 | b6ffa18d49e188c454188669987b0a4807ca3018_0 | Q: What is the reasoning method that is used?
Text: Introduction
Ontology-based knowledge bases (KBs) like DBpedia BIBREF0 are playing an increasingly important role in domains such as knowledge management, data analysis and natural language understanding. Although they are very valuable resources, the usefulness and usability of such KBs is limited by various quality issues BIBREF1 , BIBREF2 , BIBREF3 . One such issue is the use of string literals (both explicitly typed and plain literals) instead of semantically typed entities; for example in the triple $\langle $ River_Thames, passesArea, “Port Meadow, Oxford” $\rangle $ . This weakens the KB as it does not capture the semantics of such literals. If, in contrast, the object of the triple were an entity, then this entity could, e.g., be typed as Wetland and Park, and its location given as Oxford. This problem is pervasive and hence results in a significant loss of information: according to statistics from Gunaratna et al. BIBREF4 in 2016, the DBpedia property dbp:location has over 105,000 unique string literals that could be matched with entities. Besides DBpedia, such literals can also be found in some other KBs from encyclopedias (e.g., zhishi.me BIBREF5 ), in RDF graphs transformed from tabular data (e.g., LinkedGeoData BIBREF6 ), in aligned or evolving KBs, etc.
One possible remedy for this problem is to apply automated semantic typing and entity matching (AKA canonicalization) to such literals. To the best of our knowledge, semantic typing of KB literals has rarely been studied. Gunaratna et al. BIBREF4 used semantic typing in their entity summarization method, first identifying the so called focus term of a phrase via grammatical structure analysis, and then matching the focus term with both KB types and entities. Their method is, however, rather simplistic: it neither utilizes the literal's context, such as the associated property and subject, nor captures the contextual meaning of the relevant words. What has been widely studied is the semantic annotation of KB entities BIBREF7 , BIBREF8 , BIBREF9 and of noun phrases outside the KB (e.g., from web tables) BIBREF10 , BIBREF11 , BIBREF12 ; in such cases, however, the context is very different, and entity typing can, for example, exploit structured information such as the entity's linked Wikipedia page BIBREF7 and the domain and range of properties that the entity is associated with BIBREF8 .
With the development of deep learning, semantic embedding and feature learning have been widely adopted for exploring different kinds of contextual semantics in prediction, with Recurrent Neural Network (RNN) being a state-of-the-art method for dealing with structured data and text. One well known example is word2vec — an RNN language model which can represent words in a vector space that retains their meaning BIBREF13 . Another example is a recent study by Kartsaklis et al. BIBREF14 , which maps text to KB entities with a Long-short Term Memory RNN for textual feature learning. These methods offer the potential for developing accurate prediction-based methods for KB literal typing and entity matching where the contextual semantics is fully exploited.
In this study, we investigate KB literal canonicalization using a combination of RNN-based learning and semantic technologies. We first predict the semantic types of a literal by: (i) identifying candidate classes via lexical entity matching and KB queries; (ii) automatically generating positive and negative examples via KB sampling, with external semantics (e.g., from other KBs) injected for improved quality; (iii) training classifiers using relevant subject-predicate-literal triples embedded in an attentive bidirectional RNN (AttBiRNN); and (iv) using the trained classifiers and KB class hierarchy to predict candidate types. The novelty of our framework lies in its knowledge-based learning; this includes automatic candidate class extraction and sampling from the KB, triple embedding with different importance degrees suggesting different semantics, and using the predicted types to identify a potential canonical entity from the KB. We have evaluated our framework using a synthetic literal set (S-Lite) and a real literal set (R-Lite) from DBpedia BIBREF0 . The results are very promising, with significant improvements over several baselines, including the existing state-of-the-art.
Problem Statement
In this study we consider a knowledge base (KB) that includes both ontological axioms that induce (at least) a hierarchy of semantic types (i.e., classes), and assertions that describe concrete entities (individuals). Each such assertion is assumed to be in the form of an RDF triple $\langle s,p,o \rangle $ , where $s$ is an entity, $p$ is a property and $o$ can be either an entity or a literal (i.e., a typed or untyped data value such as a string or integer).
We focus on triples of the form $\langle s,p,l \rangle $ , where $l$ is a string literal; such literals can be identified by regular expressions, as in BIBREF4 , or by data type inference as in BIBREF15 . Our aim is to canonicalize $l$ by first identifying the type of $l$ , i.e., a set of classes $\mathcal {C}_l$ that an entity corresponding to $l$ should be an instance of, and then determining if such an entity already exists in the KB. The first subtask is modeled as a machine learning classification problem where a real value score in $\left[0,1\right]$ is assigned to each class $c$ occurring in the KB, and $\mathcal {C}_l$ is the set of classes determined by the assigned scores with strategies such as adopting a class if its score exceeds some threshold. The second subtask is modeled as an entity lookup problem constrained by $\mathcal {C}_l$ .
It is important to note that:
When we talk about a literal $l$ we mean the occurrence of $l$ in a triple $\langle s,p,l \rangle $ . Lexically equivalent literals might be treated very differently depending on their triple contexts.
If the KB is an OWL DL ontology, then the set of object properties (which connect two entities) and data properties (which connect an entity to a literal) should be disjoint. In practice, however, KBs such as DBpedia often don't respect this constraint. In any case, we avoid the issue by simply computing the relevant typing and canonicalization information, and leaving it up to applications as to how they want to exploit it.
We assume that no manual annotations or external labels are given — the classifier is automatically trained using the KB.
Technical Framework
The technical framework for the classification problem is shown in Fig. 1 . It involves three main steps: (i) candidate class extraction; (ii) model training and prediction; and (iii) literal typing and canonicalization.
Popular KBs like DBpedia often contain a large number of classes. For efficiency reasons, and to reduce noise in the learning process, we first identify a subset of candidate classes. This selection should be rather inclusive so as to maximize potential recall. In order to achieve this we pool the candidate classes for all literals occurring in triples with a given property; i.e., to compute the candidate classes for a literal $ł$ occurring in a triple $\langle s,p,l \rangle $ , we consider all triples that use property $p$ . Note that, as discussed above, in practice such triples may include both literals and entities as their objects. We thus use two techniques for identifying candidate classes from the given set of triples. In the case where the object of the triple is an entity, the candidates are just the set of classes that this entity is an instance of. In practice we identify the candidates for the set of all such entities, which we denote $E_P$ , via a SPARQL query to the KB, with the resulting set of classes being denoted $C_P$ . In the case where the object of the triple is a literal, we first match the literal to entities using a lexical index which is built based on the entity's name, labels and anchor text (description). To maximize recall, the literal, its tokens (words) and its sub-phrases are used to retrieve entities by lexical matching; this technique is particularly effective when the literal is a long phrase. As in the first case, we identify all relevant entities, which we denote $E_M$ , and then retrieve the relevant classes $C_M$ using a SPARQL query. The candidate class set is simply the union of $C_P$ and $C_M$ , denoted as $C_{PM}$ .
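As an illustration of how $C_P$ can be collected, the snippet below asks a public DBpedia SPARQL endpoint for the classes of entity objects of one property; the property URI is just an example and the query is a simplified version of what the framework would issue.

    # Simplified candidate class extraction for one property (illustration only).
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://dbpedia.org/sparql")
    sparql.setQuery("""
        SELECT DISTINCT ?cls WHERE {
          ?s <http://dbpedia.org/ontology/location> ?e .
          ?e a ?cls .
          FILTER (isIRI(?e))
        } LIMIT 1000
    """)
    sparql.setReturnFormat(JSON)
    bindings = sparql.query().convert()["results"]["bindings"]
    candidate_classes = {b["cls"]["value"] for b in bindings}   # a sample of C_P
    print(len(candidate_classes))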
We adopt the strategy of training one binary classifier for each candidate class, instead of multi-class classification, so as to facilitate dealing with the class hierarchy BIBREF16 . The classifier architecture includes an input layer with word embedding, an encoding layer with bidirectional RNNs, an attention layer and a fully connected (FC) layer for modeling the contextual semantics of the literal. To train a classifier, both positive and negative entities (samples), including those from $E_M$ (particular samples) and those outside $E_M$ (general samples) are extracted from the KB, with external KBs and logical constraints being used to improve sample quality. The trained classifiers are used to compute a score for each candidate class.
The final stage is to semantically type and, where possible, canonicalize literals. For a given literal, two strategies, independent and hierarchical, are used to determine its types (classes), with a score for each type. We then use these types and scores to try to identify an entity in the KB that could reasonably be substituted for the literal.
Prediction Model
Given a phrase literal $l$ and its associated RDF triple $\langle s, p, l \rangle $ , our neural network model aims at utilizing the semantics of $s$ , $p$ and $l$ for the classification of $l$ . The architecture is shown in Fig. 2 . It first separately parses the subject label, the property label and the literal into three word (token) sequences whose lengths, denoted as $T_s$ , $T_p$ and $T_l$ , are fixed to the maximum subject, property and literal sequence lengths from the training data by padding shorter sequences with null words. We then concatenate the three sequences into a single word sequence ( $word_t, t \in \left[1,T\right]$ ), where $T = T_s + T_p + T_l$ . Each word is then encoded into a vector via word embedding (null is encoded into a zero vector), and the word sequence is transformed into a vector sequence ( $x_t, t \in \left[1,T\right]$ ). Note that this preserves information about the position of words in $s$ , $p$ and $l$ .
The semantics of forward and backward surrounding words is effective in predicting a word's semantics. For example, “Port” and “Meadow” are more likely to indicate a place as they appear after “Area” and before “Oxford”. To embed such contextual semantics into a feature vector, we stack a layer composed of bidirectional Recurrent Neural Networks (BiRNNs) with Gated Recurrent Unit (GRU) BIBREF17 . Within each RNN, a reset gate $r_t$ is used to control the contribution of the past word, and an update gate $z_t$ is used to balance the contributions of the past words and the new words. The hidden state (embedding) at position $t$ is computed as
$${\left\lbrace \begin{array}{ll} h_t = (1-z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t, \\ \tilde{h}_t = \tau (W_h x_t + r_t \odot (U_h h_{t-1}) + b_h), \\ z_t = \sigma (W_z x_t + U_z h_{t-1} + b_z), \\ r_t = \sigma (W_r x_t + U_r h_{t-1} + b_r), \end{array}\right.}$$ (Eq. 13)
where $\odot $ denotes the Hadamard product, $\sigma $ and $\tau $ denote the activation function of sigmoid and tanh respectively, and $W_h$ , $U_h$ , $b_h$ , $W_z$ , $U_z$ , $b_z$ , $W_r$ , $U_r$ and $b_r$ are parameters to learn. With the two bidirectional RNNs, one forward hidden state and one backward hidden state are calculated for the sequence, denoted as ( $\overrightarrow{h_t}, t \in \left[1,T\right]$ ) and ( $\overleftarrow{h_t}, t \in \left[1,T\right]$ ) respectively. They are concatenated as the output of the RNN layer: $h_t = \left[\overrightarrow{h_t}; \overleftarrow{h_t}\right], t \in \left[1,T\right]$ .
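The GRU recurrence of Eq. (13) can be written directly in NumPy; the single step below uses randomly initialized parameters and an assumed input dimension, and is meant only to mirror the equations rather than reproduce the trained model.

    # One GRU step following Eq. (13); parameters are random and purely illustrative.
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    d_in, d_r = 200, 200            # assumed embedding size; d_r = 200 as in the experiments
    rng = np.random.default_rng(0)
    W_h, U_h, b_h = 0.01 * rng.normal(size=(d_r, d_in)), 0.01 * rng.normal(size=(d_r, d_r)), np.zeros(d_r)
    W_z, U_z, b_z = 0.01 * rng.normal(size=(d_r, d_in)), 0.01 * rng.normal(size=(d_r, d_r)), np.zeros(d_r)
    W_r, U_r, b_r = 0.01 * rng.normal(size=(d_r, d_in)), 0.01 * rng.normal(size=(d_r, d_r)), np.zeros(d_r)

    def gru_step(x_t, h_prev):
        z_t = sigmoid(W_z @ x_t + U_z @ h_prev + b_z)           # update gate
        r_t = sigmoid(W_r @ x_t + U_r @ h_prev + b_r)           # reset gate
        h_tilde = np.tanh(W_h @ x_t + r_t * (U_h @ h_prev) + b_h)
        return (1 - z_t) * h_prev + z_t * h_tilde               # new hidden state h_t

    h_1 = gru_step(rng.normal(size=d_in), np.zeros(d_r))        # first step of the forward RNN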
We assume different words are differently informative towards the type of the literal. For example, the word “port” is more important than the other words in distinguishing the type Wetland from other concrete types of Place. To this end, an attention layer is further stacked. Given the input from the RNN layer ( $h_t, t \in \left[1,T \right]$ ), the attention layer outputs $h_a = \left[\alpha _t h_t \right], t \in \left[1,T \right]$ , where $\alpha _t$ is the normalized weight of the word at position $t$ and is calculated as
$${\left\lbrace \begin{array}{ll} \alpha _t = \frac{exp(u^T_t u_w)}{\sum _{t \in \left[1,T\right]} exp (u^T_t u_w)} \\ u_t = \tau (W_w h_t + b_w), \end{array}\right.}$$ (Eq. 14)
where $u_w$ , $W_w$ and $b_w$ are parameters to learn. Specifically, $u_w$ denotes the general informative degrees of all the words, while $\alpha _t$ denotes the attention of the word at position $t$ w.r.t. other words in the sequence. Note that the attention weights can also be utilized to justify a prediction. In order to exploit information about the location of a word in the subject, property or literal, we do not calculate the weighted sum of the BiRNN output but concatenate the weighted vectors. The dimension of each RNN hidden state (i.e., $\overleftarrow{h_t}$ and $\overrightarrow{h_t}$ ), denoted as $d_r$ , and the dimension of each attention layer output (i.e., $\alpha _t h_t$ ), denoted as $d_a$ , are two hyper parameters of the network architecture.
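Similarly, the attention weights of Eq. (14) and the concatenated output $h_a$ can be sketched as follows; the internal projection size is an assumption and the parameters are again random.

    # Attention layer following Eq. (14); weighted states are concatenated, not summed,
    # to keep positional information (illustration only).
    import numpy as np

    def attention(H, W_w, b_w, u_w):
        u = np.tanh(H @ W_w.T + b_w)                      # (T, d_w)
        scores = u @ u_w                                   # (T,)
        alpha = np.exp(scores) / np.exp(scores).sum()      # normalized weights alpha_t
        return np.concatenate([a * h for a, h in zip(alpha, H)])   # h_a, length T * 2*d_r

    T, d_r, d_w = 28, 200, 50                              # T = T_s + T_p + T_l for S-Lite
    rng = np.random.default_rng(1)
    h_a = attention(rng.normal(size=(T, 2 * d_r)),
                    0.01 * rng.normal(size=(d_w, 2 * d_r)), np.zeros(d_w), rng.normal(size=d_w))
    print(h_a.shape)                                       # (11200,)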
A fully connected (FC) layer and a logistic regression layer are finally stacked for modeling the nonlinear relationship and calculating the output score respectively:
$$ f(s, p, l) = \sigma (W_f h_a + b_f),$$ (Eq. 15)
where $W_f$ and $b_f$ are the parameters to learn, $\sigma $ denotes the sigmod function, and $f$ denotes the function of the whole network.
Sampling and Training
We first extract both particular samples and general samples from the KB using SPARQL queries and reasoning; we then improve sample quality by detecting and repairing wrong and missing entity classifications with the help of external KBs; and finally we train the classifiers.
Particular samples are based on the entities $E_M$ that are lexically matched by the literals. For each literal candidate class $c$ in $C_M$ , its particular samples are generated by:
Extracting its positive particular entities: $E_M^c = \left\lbrace e | e \in E_M, e \text{ is an instance of } c \right\rbrace $ ;
Generating its positive particular samples as
$$\mathcal {P}_c^{+} = \cup _{e \in E_M^c} \left\lbrace \langle s,p,l \rangle | s \in S(p,e), l \in L(e) \right\rbrace ,$$ (Eq. 20)
where $S(p,e)$ denotes the set of entities occurring in the subject position in a triple of the form $\langle s, p, e\rangle $ , and $L(e)$ denotes all the labels (text phrases) of the entity $e$ ;
Extracting its negative particular entities $E_M^{\widetilde{c}}$ as those entities in $E_M$ that are instances of some sibling class of $c$ and not instances of $c$ ;
Generating its negative particular samples $\mathcal {P}_c^-$ with $E_M^{\widetilde{c}}$ using the same approach as for positive samples.
Given that the literal matched candidate classes $C_M$ are only a part of all the candidate classes $C_{PM}$ , and that the size of particular samples may be too small to train the neural network, we additionally generate general samples based on common KB entities. For each candidate class $c$ in $C_{PM}$ , all its entities in the KB, denoted as $E^c$ , are extracted and then its positive general samples, denoted as $\mathcal {G}_c^+$ , are generated from $E^c$ using the same approach as for particular samples. Similarly, entities of the sibling classes of $c$ , denoted as $E^{\widetilde{c}}$ , are extracted, and general negative samples, denoted as $\mathcal {G}_c^-$ , are generated from $E^{\widetilde{c}}$ . As for the negative particular entities, we check each entity in $E^{\widetilde{c}}$ and remove those that are instances of $c$ .
Unlike the particular samples, the positive and negative general samples are balanced. This means that we reduce the size of $\mathcal {G}_c^+$ and $\mathcal {G}_c^-$ to the minimum of $\#(\mathcal {G}_c^+)$ , $\#(\mathcal {G}_c^-)$ and $N_0$ , where $\#()$ denotes set cardinality, and $N_0$ is a hyper parameter for sampling. Size reduction is implemented via random sampling.
Many KBs are quite noisy, with wrong or missing entity classifications. For example, when using the SPARQL endpoint of DBpedia, dbr:Scotland is classified as dbo:MusicalArtist instead of as dbo:Country, while dbr:Afghan appears without a type. We have corrected and complemented the sample generation by combining the outputs of more than one KB. For example, the DBpedia endpoint suggestions are compared against Wikidata and the DBpedia lookup service. Most DBpedia entities are mapped to Wikidata entities whose types are used to validate and complement the suggested types from the DBpedia endpoint. In addition, the lookup service, although incomplete, typically provides very precise types that can also confirm the validity of the DBpedia endpoint types. The validation is performed by identifying if the types suggested by one KB are compatible with those returned by other KBs, that is, if the relevant types belong to the same branch of the hierarchy (e.g., the DBpedia taxonomy). With the new entity classifications, the samples are revised accordingly.
We train a binary classifier $f^c$ for each class $c$ in $C_{PM}$ . It is first pre-trained with general samples $\mathcal {G}_{c}^+ \cup \mathcal {G}_{c}^-$ , and then fine tuned with particular samples $\mathcal {P}_{c}^+ \cup \mathcal {P}_{c}^-$ . Pre-training deals with the shortage of particular samples, while fine-tuning bridges the gap between common KB entities and the entities associated with the literals, which is also known as domain adaptation. Given that pre-training is the most time consuming step, but is task agnostic, classifiers for all the classes in a KB could be pre-trained in advance to accelerate a specific literal canonicalization task.
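The two-stage training can be sketched as follows; the stand-in model and random data are only there to show pre-training on general samples followed by fine-tuning on particular samples, and the smaller fine-tuning learning rate is our assumption rather than a reported setting.

    # Pre-training then fine-tuning (domain adaptation); the tiny dense model and random
    # features stand in for the AttBiRNN and the real samples (illustration only).
    import numpy as np
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(400,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

    X_gen, y_gen = np.random.rand(2400, 400), np.random.randint(0, 2, 2400)  # general samples
    X_par, y_par = np.random.rand(300, 400), np.random.randint(0, 2, 300)    # particular samples

    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="binary_crossentropy")
    model.fit(X_gen, y_gen, epochs=5, verbose=0)   # pre-train (task-agnostic, can be done in advance)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="binary_crossentropy")
    model.fit(X_par, y_par, epochs=5, verbose=0)   # fine-tune on the literal-matched entities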
Independent and Hierarchical Typing
In prediction, the binary classifier for class $c$ , denoted as $f^c$ , outputs a score $y_l^c$ indicating the probability that a literal $l$ belongs to class $c$ : $y_l^c = f^c(l)$ , $y_l^c \in \left[0,1\right]$ . With the predicted scores, we adopt two strategies, independent and hierarchical, to determine the types. In the independent strategy, the relationship between classes is not considered. A class $c$ is selected as a type of $l$ if its score $y_l^c \ge \theta $ , where $\theta $ is a threshold hyper parameter in $\left[0,1\right]$ .
The hierarchical strategy considers the class hierarchy and the disjointness between sibling classes. We first calculate a hierarchical score for each class with the predicted scores of itself and its descendents:
$$s_l^c = max\left\lbrace y_l^{c^{\prime }} | c^{\prime } \sqsubseteq c,\text{ } c^{\prime } \in C_{PM} \right\rbrace ,$$ (Eq. 28)
where $\sqsubseteq $ denotes the subclass relationship between two classes, $C_{PM}$ is the set of candidate classes for $l$ , and $max$ denotes the maximum value of a set. For a candidate class $c^{\prime }$ in $C_{PM}$ , we denote all disjoint candidate classes as $\mathcal {D}(C_{PM}, c^{\prime })$ . They can be defined as sibling classes of both $c^{\prime }$ and its ancestors, or via logical constraints in the KB. A class $c$ is selected as a type of $l$ if (i) its hierarchical score $s_l^c \ge \theta $ , and (ii) it satisfies the following soft exclusion condition:
$$s_l^c - max\left\lbrace s_l^{c^{\prime }} | c^{\prime } \in \mathcal {D}(C_{PM}, c) \right\rbrace \ge \kappa ,$$ (Eq. 29)
where $\kappa $ is a relaxation hyper parameter. The exclusion of disjoint classes is hard if $\kappa $ is set to 0, and relaxed if $\kappa $ is set to a negative float with a small absolute value e.g., $-0.1$ .
Finally, for a given literal $l$ , we return the set of all selected classes as its types $\mathcal {C}_l$ .
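The hierarchical strategy amounts to a few lines of code once the scores, the class hierarchy and the disjointness sets are available; the toy hierarchy below is only an illustration of Eqs. (28)-(29), not part of the evaluated system.

    # Hierarchical typing with soft exclusion (Eqs. 28-29); toy inputs for illustration.
    def hierarchical_types(scores, descendants, disjoint, theta=0.5, kappa=-0.1):
        # scores: class -> y_l^c; descendants[c] contains c and its candidate subclasses
        s = {c: max(scores[d] for d in descendants[c]) for c in scores}      # Eq. (28)
        selected = set()
        for c in scores:
            competing = [s[d] for d in disjoint.get(c, []) if d in s]
            if s[c] >= theta and (not competing or s[c] - max(competing) >= kappa):  # Eq. (29)
                selected.add(c)
        return selected

    scores = {"Place": 0.30, "Wetland": 0.80, "Park": 0.75, "Person": 0.10}
    descendants = {"Place": ["Place", "Wetland", "Park"], "Wetland": ["Wetland"],
                   "Park": ["Park"], "Person": ["Person"]}
    disjoint = {"Wetland": ["Park"], "Park": ["Wetland"], "Place": ["Person"], "Person": ["Place"]}
    print(hierarchical_types(scores, descendants, disjoint))
    # {'Place', 'Wetland', 'Park'}: both siblings survive under the relaxed exclusion.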
Canonicalization
Given a literal $l$ , we use $\mathcal {C}_l$ to try to identify an associated entity. A set of candidate entities is first retrieved using the lexical index that is built on the entity's name, label, anchor text, etc. Unlike candidate class extraction, here we use the whole text phrase of the literal, and rank the candidate entities according to their lexical similarities. Those entities that are not instances of any classes in $\mathcal {C}_l$ are then filtered out, and the most similar entity among the remainder is selected as the associated entity for $l$ . If no entities are retrieved, or all the retrieved entities are filtered out, then the literal could be associated with a new entity whose types are the most specific classes in $\mathcal {C}_l$ . In either case we can improve the quality of our results by checking that the resulting entities would be consistent if added to the KB, and discarding any entity associations that would lead to inconsistency.
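A minimal sketch of this matching step is given below; the candidate list is a toy stand-in for what the lexical index (e.g., the DBpedia lookup service) would return.

    # Entity matching constrained by the predicted types C_l (toy candidates).
    def canonicalize(predicted_types, candidates):
        # candidates: (entity, lexical_similarity, classes), ranked here by similarity
        for entity, _, classes in sorted(candidates, key=lambda x: -x[1]):
            if classes & predicted_types:
                return entity
        return None   # fall back to a new entity typed with the most specific classes in C_l

    candidates = [("dbr:Port_Meadow,_Oxford", 0.9, {"Wetland", "Park"}),
                  ("dbr:Oxford", 0.6, {"Town"})]
    print(canonicalize({"Wetland", "Park"}, candidates))   # dbr:Port_Meadow,_Oxford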
Experiment Setting
In the experiments, we adopt a real literal set (R-Lite) and a synthetic literal set (S-Lite), both of which are extracted from DBpedia. R-Lite is based on the property and literal pairs published by Gunaratna et al. in 2016 BIBREF4 . We refine the data by (i) removing literals that no longer exist in the current version of DBpedia; (ii) extracting new literals from DBpedia for properties whose existing literals were all removed in step (i); (iii) extending each property and literal pair with an associated subject; and (iv) manually adding ground truth types selected from classes defined in the DBpedia Ontology (DBO). To fully evaluate the study with more data, we additionally constructed S-Lite from DBpedia by repeatedly: (i) selecting a DBpedia triple of the form $\langle s,p,e \rangle $ , where $e$ is an entity; (ii) replacing $e$ with its label $l$ to give a triple $\langle s,p,l \rangle $ ; (iii) eliminating the entity $e$ from DBpedia; and (iv) adding as ground truth types the DBpedia classes of which $e$ is (implicitly) an instance. More data details are shown in Table 1 .
In evaluating the typing performance, Precision, Recall and F1 Score are used. For a literal $l$ , the computed types $\mathcal {C}_l$ are compared with the ground truths $\mathcal {C}_l^{gt}$ , and the following micro metrics are calculated: $P_l = \frac{\# (\mathcal {C}_l \cap \mathcal {C}_l^{gt})}{\# (\mathcal {C}_l)}$ , $R_l = \frac{\# (\mathcal {C}_l \cap \mathcal {C}_l^{gt})}{\# (\mathcal {C}_l^{gt})}$ , and ${F_1}_l = \frac{2 \times P_l \times R_l}{P_l + R_l}$ . They are then averaged over all the literals as the final Precision, Recall and F1 Score of a literal set. Although F1 Score measures the overall performance with both Precision and Recall considered, it depends on the threshold hyper parameter $\theta $ as with Precision and Recall. Thus we let $\theta $ range from 0 to 1 with a step of $0.01$ , and calculate the average of all the F1 Scores (AvgF1@all) and top 5 highest F1 Scores (AvgF1@top5). AvgF1@all measures the overall pattern recognition capability, while AvgF1@top5 is relevant in real applications where we often use a validation data set to find a $\theta $ setting that is close to the optimum. We also use the highest (top) Precision in evaluating the sample refinement.
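The metrics can be reproduced with a few lines; the scores and ground truths below are toy values, and the theta grid follows the 0.01 step described above.

    # Micro P/R/F1 per literal, plus AvgF1@all and AvgF1@top5 over a theta grid (toy data).
    import numpy as np

    def prf1(pred, gold):
        inter = len(pred & gold)
        p = inter / len(pred) if pred else 0.0
        r = inter / len(gold) if gold else 0.0
        return p, r, (2 * p * r / (p + r) if p + r else 0.0)

    scores = [{"Wetland": 0.8, "Park": 0.7, "Person": 0.2}, {"Town": 0.9, "Park": 0.4}]   # y_l^c
    golds = [{"Wetland", "Park"}, {"Town"}]

    f1_by_theta = []
    for theta in np.arange(0.0, 1.01, 0.01):              # independent typing over the theta grid
        f1s = [prf1({c for c, y in s.items() if y >= theta}, g)[2] for s, g in zip(scores, golds)]
        f1_by_theta.append(float(np.mean(f1s)))
    print(np.mean(f1_by_theta), np.mean(sorted(f1_by_theta)[-5:]))   # AvgF1@all, AvgF1@top5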
In evaluating entity matching performance, Precision is measured by manually checking whether the identified entity is correct or not. S-Lite is not used for entity matching evaluation as the corresponding entities for all its literals are assumed to be excluded from the KB. We are not able to measure recall for entity matching as we do not have the ground truths; instead, we have evaluated entity matching with different confidence thresholds and compared the number of correct results.
The evaluation includes three aspects. We first compare different settings of the typing framework, analyzing the impacts of sample refinement, fine tuning by particular samples, BiRNN and the attention mechanism. We also compare the independent and hierarchical typing strategies. We then compare the overall typing performance of our framework with (i) Gunaratna et al. BIBREF4 , which matches the literal to both classes and entities; (ii) an entity lookup based method; and (iii) a probabilistic property range estimation method. Finally, we analyze the performance of entity matching with and without the predicted types.
The DBpedia lookup service, which is based on the Spotlight index BIBREF18 , is used for entity lookup (retrieval). The DBpedia SPARQL endpoint is used for query answering and reasoning. The reported results are based on the following settings: the Adam optimizer together with cross-entropy loss are used for network training; $d_r$ and $d_a$ are set to 200 and 50 respectively; $N_0$ is set to 1200; word2vec trained with the latest Wikipedia article dump is adopted for word embedding; and ( $T_s$ , $T_p$ , $T_l$ ) are set to (12, 4, 12) for S-Lite and (12, 4, 15) for R-Lite. The experiments are run on a workstation with Intel(R) Xeon(R) CPU E5-2670 @2.60GHz, with programs implemented by Tensorflow.
Results on Framework Settings
We first evaluate the impact of the neural network architecture, fine tuning and different typing strategies, with their typing results on S-Lite shown in Table 2 and Fig. 3 . Our findings are supported by comparable results on R-Lite. We further evaluate sample refinement, with some statistics of the refinement operations as well as performance improvements shown in Fig. 4 .
According to Table 2 , we find BiRNN significantly outperforms Multiple Layer Perceptron (MLP), a basic but widely used neural network model, while stacking an attention layer (AttBiRNN) further improves AvgF1@all and AvgF1@top5, for example by $3.7\%$ and $3.1\%$ respectively with hierarchical typing ( $\kappa $ = $-0.1$ ). The result is consistent for both pre-trained models and fine tuned models, using both independent and hierarchical typing strategies. This indicates the effectiveness of our neural network architecture. Meanwhile, the performance of all the models is significantly improved after they are fine tuned by the particular samples, as expected. For example, when the independent typing strategy is used, AvgF1@all and AvgF1@top5 of AttBiRNN are improved by $54.1\%$ and $35.2\%$ respectively.
The impact of independent and hierarchical typing strategies is more complex. As shown in Table 2 , when the classifier is weak (e.g., pre-trained BiRNN), hierarchical typing with both hard exclusion ( $\kappa $ = 0) and relaxed exclusion ( $\kappa $ = $-0.1$ ) has higher AvgF1@all and AvgF1@top5 than independent typing. However, when a strong classifier (e.g., fine tuned AttBiRNN) is used, AvgF1@all and AvgF1@top5 of hierarchical typing with relaxed exclusion are close to independent typing, while hierarchical typing with hard exclusion has worse performance. We further analyze Precision, Recall and F1 Score of both typing strategies under varying threshold ( $\theta $ ) values, as shown in Fig. 3 . In comparison with independent typing, hierarchical typing achieves (i) more stable Precision, Recall and F1 Score curves; and (ii) significantly higher Precision, especially when $\theta $ is small. Meanwhile, as with the results in Table 2 , relaxed exclusion outperforms hard exclusion in hierarchical typing except for Precision when $\theta $ is between 0 and $0.05$ .
Fig. 4 [Right] shows the ratio of positive and negative particular samples that are deleted and added during sample refinement. The AttBiRNN classifiers fine tuned by the refined particular samples are compared with those fine tuned by the original particular samples. The improvements on AvgF1@all, AvgF1@top5 and top Precision, which are based on the average of the three above typing settings, are shown in Fig. 4 [Left]. On the one hand, we find sample refinement benefits both S-Lite and R-Lite, as expected. On the other hand, we find the improvement on S-Lite is limited, while the improvement on R-Lite is quite significant: F1@all and top Precision, e.g., are improved by around $0.8\%$ and $1.8\%$ respectively on S-Lite, but $4.3\%$ and $7.4\%$ respectively on R-Lite. This may be due to two factors: (i) the ground truths of S-Lite are the entities' class and super classes inferred from the KB itself, while the ground truths of R-Lite are manually labeled; (ii) sample refinement deletes many more noisy positive and negative samples (which are caused by wrong entity classifications of the KB) on R-Lite than on S-Lite, as shown in Fig. 4 [Right].
Results on Semantic Typing
Table 3 displays the overall semantic typing performance of our method and the baselines. Results for two optimum settings are reported for each method. The baseline Entity-Lookup retrieves one or several entities using the whole phrase of the literal, and uses their classes and super classes as the types. Gunaratna BIBREF4 matches the literal's focus term (head word) to an exact class, then an exact entity, and then a class with the highest similarity score. It stops as soon as some classes or entities are matched. We extend its original “exact entity match" setting with “relaxed entity match" which means multiple entities are retrieved. Property Range Estimation gets the classes and super classes from the entity objects of the property, and calculates the score of each class as the ratio of entity objects that belong to that class. (H/I, $\kappa $ , $\cdot $ )@top-P (F1) denotes the setting where the highest Precision (F1 Score) is achieved.
As we can see, AttBiRNN achieves much higher performance than all three baselines on both S-Lite and R-Lite. For example, the F1 Score of AttBiRNN is $67.6\%$ , $160.2\%$ and $13.8\%$ higher than those of Gunaratna, Entity-Lookup and Property Range Estimation respectively on S-Lite, and $28.5\%$ , $58.3\%$ and $37.9\%$ higher respectively on R-Lite. AttBiRNN also has significantly higher Precision and Recall, even when the setting is adjusted for the highest F1 Score. This is as expected, because our neural network, which learns the semantics (statistical correlation) from both word vector corpus and KB, models and utilizes the contextual meaning of the literal and its associated triple, while Gunaratna and Entity-Lookup are mostly based on lexical similarity. The performance of Property Range Estimation is limited because the object annotation in DBpedia usually does not follow the property range, especially for those properties in R-Lite. For example, objects of the property dbp:office have 35 DBO classes, ranging from dbo:City and dbo:Country to dbo:Company.
It is also notable that AttBiRNN and Property Range Estimation perform better on S-Lite than on R-Lite. The top F1 Score is $20.7\%$ and $46.2\%$ higher respectively, while the top Precision is $11.4\%$ and $43.6\%$ higher respectively. This is because R-Lite is more noisy, with longer literals, and has more ground truth types on average (cf. Table 1 ), while S-Lite has fewer properties, and each property has a large number of entity objects, which significantly benefits Property Range Estimation. In contrast, the two entity matching based methods, Gunaratna and Entity-Lookup, perform worse on S-Lite than on R-Lite; this is because the construction of S-Lite removes those KB entities from which literals were derived. Gunaratna outperforms Entity-Lookup as it extracts the head word and matches it to both entities and classes. Note that the head word is also included in our candidate class extraction with lookup.
Results on Entity Matching
Table 4 displays the number of correct matched entities and the Precision of entity matching on R-Lite. The types are predicted by the fine-tuned AttBiRNN with independent typing and two threshold settings. We can see that Precision is improved when the retrieved entities that do not belong to any of the predicted types are filtered out. The improvement is $6.1\%$ and $5.8\%$ when $\theta $ is set to $0.15$ and $0.01$ respectively. Meanwhile, although the total number of matches may decrease because of the filtering, the number of correct matches still increases from 396 to 404 ( $\theta =0.01$ ). This means that Recall is also improved.
Related Work
Work on KB quality issues can be divided into KB quality assessment BIBREF2 , BIBREF1 , and KB quality improvement/refinement BIBREF3 . The former includes error and anomaly detection methods, such as test-driven and query template based approaches BIBREF19 , BIBREF20 , with statistical methods BIBREF21 and consistency reasoning BIBREF22 also being applied to assess KB quality with different kinds of metrics. The latter includes (i) KB completion, such as entity classification BIBREF7 , BIBREF8 , BIBREF9 , relation prediction BIBREF23 and data typing BIBREF15 ; and (ii) KB diagnosis and repair, such as abnormal value detection BIBREF20 , erroneous identity link detection BIBREF24 and data mapping (e.g., links to Wikipedia pages) correction BIBREF25 .
KB canonicalization refers to those refinement works that deal with redundant and ambiguous KB components as well as poorly expressed knowledge with limited reasoning potential. Some works in open information extraction (IE) BIBREF26 , BIBREF27 , BIBREF28 aim to identify synonymous noun phrases and relation phrases of open KBs which are composed of triple assertions extracted from text without any ontologies. For example, the recently proposed CESI method BIBREF27 utilizes both learned KB embeddings and side information like WordNet to find synonyms via clustering. Other works analyze synonyms for ontological KBs. Abedjan et al. BIBREF29 discovered synonymously used predicates for query expansion on DBpedia. Pujara et al. BIBREF30 identified coreferent entities of NELL with ontological constraints considered. These clustering, embedding, or entity linking based methods in open IE however can not be directly applied or do not work well for our KB literal canonicalization. The utilization of these techniques will be in our future work.
String literals in ontological KBs such as DBpedia often represent poorly expressed knowledge, with semantic types and coreferent entities missing. As far as we know, canonicalization of such literals has been little studied. Gunaratna et al. BIBREF4 typed the literal by matching its head term to ontology classes and KB entities, but the literal context (e.g., the associated subject and property) and the semantic meaning of its constituent words were not utilized. Some ideas from entity classification can be borrowed for literal typing but become ineffective as the context differs. For example, the baseline Property Range Estimation in our experiments uses the idea of SDType BIBREF8 — utilizing the statistical distribution of types in the subject position and object position of properties to estimate an entity's type probabilities. As a literal is associated with only one property, such probabilistic estimation becomes inaccurate (cf. results in Table 3 ).
Our literal classification model is in some degree inspired by those natural language understanding and web table annotation works that match external noun phrases to KB types and entities BIBREF14 , BIBREF10 , BIBREF12 using neural networks and semantic embeddings for modeling the contextual semantics. For example, Luo et al. BIBREF10 learned features from the surrounding cells of a target cell to predict its entity association. However the context in those works is very different, i.e., a simple regular structure of rows/columns with limited (table) metadata. In contrast, KBs have a complex irregular structure and rich meta data (the knowledge captured in the KB). Differently from these works, we developed different methods, e.g., candidate class extraction and high quality sampling, to learn the network from the KB with its assertions, terminologies and reasoning capability.
Discussion and Outlook
In this paper we present our study on KB literal canonicalization — an important aspect of KB quality that has been little studied. A new technical framework is proposed that combines neural networks with knowledge-based learning. It (i) extracts candidate classes as well as their positive and negative samples from the KB by lookup and query answering, with their quality improved using an external KB; (ii) trains classifiers that can effectively learn a literal's contextual features with BiRNNs and an attention mechanism; and (iii) identifies types and matches entities for canonicalization. We use a real data set and a synthetic data set, both extracted from DBpedia, for evaluation. Our method achieves much higher performance than the baselines, which include the state-of-the-art. We discuss below some more subjective observations and possible directions for future work.
Acknowledgments
The work is supported by the AIDA project (U.K. Government's Defence & Security Programme in support of the Alan Turing Institute), the SIRIUS Centre for Scalable Data Access (Research Council of Norway, project 237889), the Royal Society, EPSRC projects DBOnto, $\text{MaSI}^{\text{3}}$ and $\text{ED}^{\text{3}}$ . | SPARQL |
2b61893b22ac190c94c2cb129e86086888347079 | 2b61893b22ac190c94c2cb129e86086888347079_0 | Q: What KB is used in this work?
Text: Introduction
Ontology-based knowledge bases (KBs) like DBpedia BIBREF0 are playing an increasingly important role in domains such as knowledge management, data analysis and natural language understanding. Although they are very valuable resources, the usefulness and usability of such KBs is limited by various quality issues BIBREF1 , BIBREF2 , BIBREF3 . One such issue is the use of string literals (both explicitly typed and plain literals) instead of semantically typed entities; for example in the triple $\langle $ River_Thames, passesArea, “Port Meadow, Oxford" $\rangle $ . This weakens the KB as it does not capture the semantics of such literals. If, in contrast, the object of the triple were an entity, then this entity could, e.g., be typed as Wetland and Park, and its location given as Oxford. This problem is pervasive and hence results in a significant loss of information: according to statistics from Gunaratna et al. BIBREF4 in 2016, the DBpedia property dbp:location has over 105,000 unique string literals that could be matched with entities. Besides DBpedia, such literals can also be found in some other KBs from encyclopedias (e.g., zhishi.me BIBREF5 ), in RDF graphs transformed from tabular data (e.g., LinkedGeoData BIBREF6 ), in aligned or evolving KBs, etc.
One possible remedy for this problem is to apply automated semantic typing and entity matching (AKA canonicalization) to such literals. To the best of our knowledge, semantic typing of KB literals has rarely been studied. Gunaratna et al. BIBREF4 used semantic typing in their entity summarization method, first identifying the so called focus term of a phrase via grammatical structure analysis, and then matching the focus term with both KB types and entities. Their method is, however, rather simplistic: it neither utilizes the literal's context, such as the associated property and subject, nor captures the contextual meaning of the relevant words. What has been widely studied is the semantic annotation of KB entities BIBREF7 , BIBREF8 , BIBREF9 and of noun phrases outside the KB (e.g., from web tables) BIBREF10 , BIBREF11 , BIBREF12 ; in such cases, however, the context is very different, and entity typing can, for example, exploit structured information such as the entity's linked Wikipedia page BIBREF7 and the domain and range of properties that the entity is associated with BIBREF8 .
With the development of deep learning, semantic embedding and feature learning have been widely adopted for exploring different kinds of contextual semantics in prediction, with Recurrent Neural Network (RNN) being a state-of-the-art method for dealing with structured data and text. One well known example is word2vec — an RNN language model which can represent words in a vector space that retains their meaning BIBREF13 . Another example is a recent study by Kartsaklis et al. BIBREF14 , which maps text to KB entities with a Long-short Term Memory RNN for textual feature learning. These methods offer the potential for developing accurate prediction-based methods for KB literal typing and entity matching where the contextual semantics is fully exploited.
In this study, we investigate KB literal canonicalization using a combination of RNN-based learning and semantic technologies. We first predict the semantic types of a literal by: (i) identifying candidate classes via lexical entity matching and KB queries; (ii) automatically generating positive and negative examples via KB sampling, with external semantics (e.g., from other KBs) injected for improved quality; (iii) training classifiers using relevant subject-predicate-literal triples embedded in an attentive bidirectional RNN (AttBiRNN); and (iv) using the trained classifiers and KB class hierarchy to predict candidate types. The novelty of our framework lies in its knowledge-based learning; this includes automatic candidate class extraction and sampling from the KB, triple embedding with different importance degrees suggesting different semantics, and using the predicted types to identify a potential canonical entity from the KB. We have evaluated our framework using a synthetic literal set (S-Lite) and a real literal set (R-Lite) from DBpedia BIBREF0 . The results are very promising, with significant improvements over several baselines, including the existing state-of-the-art.
Problem Statement
In this study we consider a knowledge base (KB) that includes both ontological axioms that induce (at least) a hierarchy of semantic types (i.e., classes), and assertions that describe concrete entities (individuals). Each such assertion is assumed to be in the form of an RDF triple $\langle s,p,o \rangle $ , where $s$ is an entity, $p$ is a property and $o$ can be either an entity or a literal (i.e., a typed or untyped data value such as a string or integer).
We focus on triples of the form $\langle s,p,l \rangle $ , where $l$ is a string literal; such literals can be identified by regular expressions, as in BIBREF4 , or by data type inference as in BIBREF15 . Our aim is to canonicalize $l$ by first identifying the type of $l$ , i.e., a set of classes $\mathcal {C}_l$ that an entity corresponding to $l$ should be an instance of, and then determining if such an entity already exists in the KB. The first subtask is modeled as a machine learning classification problem where a real-valued score in $\left[0,1\right]$ is assigned to each class $c$ occurring in the KB, and $\mathcal {C}_l$ is the set of classes determined by the assigned scores using strategies such as adopting a class if its score exceeds some threshold. The second subtask is modeled as an entity lookup problem constrained by $\mathcal {C}_l$ .
It is important to note that:
When we talk about a literal $l$ we mean the occurrence of $l$ in a triple $\langle s,p,l \rangle $ . Lexically equivalent literals might be treated very differently depending on their triple contexts.
If the KB is an OWL DL ontology, then the set of object properties (which connect two entities) and data properties (which connect an entity to a literal) should be disjoint. In practice, however, KBs such as DBpedia often don't respect this constraint. In any case, we avoid the issue by simply computing the relevant typing and canonicalization information, and leaving it up to applications as to how they want to exploit it.
We assume that no manual annotations or external labels are given — the classifier is automatically trained using the KB.
Technical Framework
The technical framework for the classification problem is shown in Fig. 1 . It involves three main steps: (i) candidate class extraction; (ii) model training and prediction; and (iii) literal typing and canonicalization.
Popular KBs like DBpedia often contain a large number of classes. For efficiency reasons, and to reduce noise in the learning process, we first identify a subset of candidate classes. This selection should be rather inclusive so as to maximize potential recall. In order to achieve this we pool the candidate classes for all literals occurring in triples with a given property; i.e., to compute the candidate classes for a literal $l$ occurring in a triple $\langle s,p,l \rangle $ , we consider all triples that use property $p$ . Note that, as discussed above, in practice such triples may include both literals and entities as their objects. We thus use two techniques for identifying candidate classes from the given set of triples. In the case where the object of the triple is an entity, the candidates are just the set of classes that this entity is an instance of. In practice we identify the candidates for the set of all such entities, which we denote $E_P$ , via a SPARQL query to the KB, with the resulting set of classes being denoted $C_P$ . In the case where the object of the triple is a literal, we first match the literal to entities using a lexical index which is built based on the entity's name, labels and anchor text (description). To maximize recall, the literal, its tokens (words) and its sub-phrases are used to retrieve entities by lexical matching; this technique is particularly effective when the literal is a long phrase. As in the first case, we identify all relevant entities, which we denote $E_M$ , and then retrieve the relevant classes $C_M$ using a SPARQL query. The candidate class set is simply the union of $C_P$ and $C_M$ , denoted as $C_{PM}$ .
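The class retrieval for a set of entities can be done with a single SPARQL query, as in the sketch below. SPARQLWrapper is used purely for illustration, and the lexical index that produces $E_M$ is assumed to exist.

from SPARQLWrapper import SPARQLWrapper, JSON

def classes_of(entities, endpoint="http://dbpedia.org/sparql"):
    # Retrieve all asserted classes of the given entities in one query.
    sparql = SPARQLWrapper(endpoint)
    sparql.setReturnFormat(JSON)
    values = " ".join("<%s>" % e for e in entities)
    sparql.setQuery("""
        PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
        SELECT DISTINCT ?c WHERE { VALUES ?e { %s } ?e rdf:type ?c . }
    """ % values)
    bindings = sparql.query().convert()["results"]["bindings"]
    return {b["c"]["value"] for b in bindings}

# C_PM is then the union of the classes of E_P (entity objects of the property)
# and of E_M (entities lexically matched by the literals):
# C_PM = classes_of(E_P) | classes_of(E_M)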
We adopt the strategy of training one binary classifier for each candidate class, instead of multi-class classification, so as to facilitate dealing with the class hierarchy BIBREF16 . The classifier architecture includes an input layer with word embedding, an encoding layer with bidirectional RNNs, an attention layer and a fully connected (FC) layer for modeling the contextual semantics of the literal. To train a classifier, both positive and negative entities (samples), including those from $E_M$ (particular samples) and those outside $E_M$ (general samples) are extracted from the KB, with external KBs and logical constraints being used to improve sample quality. The trained classifiers are used to compute a score for each candidate class.
The final stage is to semantically type and, where possible, canonicalise literals. For a given literal, two strategies, independent and hierarchical, are used to determine its types (classes), with a score for each type. We then use these types and scores to try to identify an entity in the KB that could reasonably be substituted for the literal.
Prediction Model
Given a phrase literal $l$ and its associated RDF triple $\langle s, p, l \rangle $ , our neural network model aims at utilizing the semantics of $s$ , $p$ and $l$ for the classification of $l$ . The architecture is shown in Fig. 2 . It first separately parses the subject label, the property label and the literal into three word (token) sequences whose lengths, denoted as $T_s$ , $T_p$ and $T_l$ , are fixed to the maximum subject, property and literal sequence lengths from the training data by padding shorter sequences with null words. We then concatenate the three sequences into a single word sequence ( $word_t, t \in \left[1,T\right]$ ), where $T = T_s + T_p + T_l$ . Each word is then encoded into a vector via word embedding (null is encoded into a zero vector), and the word sequence is transformed into a vector sequence ( $x_t, t \in \left[1,T\right]$ ). Note that this preserves information about the position of words in $s$ , $p$ and $l$ .
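A sketch of this input construction is shown below; word_index is an assumed word-to-id mapping from the word2vec vocabulary, with 0 reserved for the null (padding) word, and unknown words are also mapped to 0 here for simplicity.

import numpy as np

def build_input(subject_label, property_label, literal, word_index,
                T_s=12, T_p=4, T_l=12):
    def encode(text, T):
        ids = [word_index.get(w, 0) for w in text.lower().split()][:T]
        return ids + [0] * (T - len(ids))     # pad shorter sequences with the null word
    # Concatenate the three padded sequences; T = T_s + T_p + T_l.
    return np.array(encode(subject_label, T_s)
                    + encode(property_label, T_p)
                    + encode(literal, T_l))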
The semantics of forward and backward surrounding words is effective in predicting a word's semantics. For example, “Port” and “Meadow” are more likely to indicate a place as they appear after “Area” and before “Oxford”. To embed such contextual semantics into a feature vector, we stack a layer composed of bidirectional Recurrent Neural Networks (BiRNNs) with Gated Recurrent Unit (GRU) BIBREF17 . Within each RNN, a reset gate $r_t$ is used to control the contribution of the past word, and an update gate $z_t$ is used to balance the contributions of the past words and the new words. The hidden state (embedding) at position $t$ is computed as
$${\left\lbrace \begin{array}{ll} h_t = (1-z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t, \\ \tilde{h}_t = \tau (W_h x_t + r_t \odot (U_h h_{t-1}) + b_h), \\ z_t = \sigma (W_z x_t + U_z h_{t-1} + b_z), \\ r_t = \sigma (W_r x_t + U_r h_{t-1} + b_r), \end{array}\right.}$$ (Eq. 13)
where $\odot $ denotes the Hadamard product, $\sigma $ and $\tau $ denote the sigmoid and tanh activation functions respectively, and $W_h$ , $U_h$ , $b_h$ , $W_z$ , $U_z$ , $b_z$ , $W_r$ , $U_r$ and $b_r$ are parameters to learn. With the two bidirectional RNNs, one forward hidden state and one backward hidden state are calculated for the sequence, denoted as ( $\overrightarrow{h_t}, t \in \left[1,T\right]$ ) and ( $\overleftarrow{h_t}, t \in \left[1,T\right]$ ) respectively. They are concatenated as the output of the RNN layer: $h_t = \left[\overrightarrow{h_t}; \overleftarrow{h_t}\right], t \in \left[1,T\right]$ .
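For clarity, a direct NumPy transcription of the GRU update in Eq. ( 13 ) might look as follows; P is a dictionary of the weight matrices and bias vectors, and in practice a library GRU implementation would be used instead.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, P):
    z_t = sigmoid(P["Wz"] @ x_t + P["Uz"] @ h_prev + P["bz"])          # update gate
    r_t = sigmoid(P["Wr"] @ x_t + P["Ur"] @ h_prev + P["br"])          # reset gate
    h_tilde = np.tanh(P["Wh"] @ x_t + r_t * (P["Uh"] @ h_prev) + P["bh"])
    return (1.0 - z_t) * h_prev + z_t * h_tilde                        # new hidden state

Running such a cell over the sequence in both directions and concatenating the two hidden states at each position gives the BiRNN output described above.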
We assume different words are differently informative towards the type of the literal. For example, the word “port” is more important than the other words in distinguishing the type Wetland from other concrete types of Place. To this end, an attention layer is further stacked. Given the input from the RNN layer ( $h_t, t \in \left[1,T \right]$ ), the attention layer outputs $h_a = \left[\alpha _t h_t \right], t \in \left[1,T \right]$ , where $\alpha _t$ is the normalized weight of the word at position $t$ and is calculated as
$${\left\lbrace \begin{array}{ll} \alpha _t = \frac{exp(u^T_t u_w)}{\sum _{t \in \left[1,T\right]} exp (u^T_t u_w)} \\ u_t = \tau (W_w h_t + b_w), \end{array}\right.}$$ (Eq. 14)
where $u_w$ , $W_w$ and $b_w$ are parameters to learn. Specifically, $u_w$ denotes the general informative degrees of all the words, while $\alpha _t$ denotes the attention of the word at position $t$ w.r.t. other words in the sequence. Note that the attention weights can also be utilized to justify a prediction. In order to exploit information about the location of a word in the subject, property or literal, we do not calculate the weighted sum of the BiRNN output but concatenate the weighted vectors. The dimension of each RNN hidden state (i.e., $\overleftarrow{h_t}$ and $\overrightarrow{h_t}$ ), denoted as $d_r$ , and the dimension of each attention layer output (i.e., $\alpha _t h_t$ ), denoted as $d_a$ , are two hyper parameters of the network architecture.
A fully connected (FC) layer and a logistic regression layer are finally stacked for modeling the nonlinear relationship and calculating the output score respectively:
$$ f(s, p, l) = \sigma (W_f h_a + b_f),$$ (Eq. 15)
where $W_f$ and $b_f$ are the parameters to learn, $\sigma $ denotes the sigmoid function, and $f$ denotes the function of the whole network.
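Putting the layers together, a possible Tensorflow/Keras realisation of such an AttBiRNN classifier is sketched below. The vocabulary size, embedding dimension, FC width and ReLU activation are illustrative assumptions; the attention computation follows Eq. ( 14 ), the output layer follows Eq. ( 15 ), and one such binary model would be trained per candidate class.

import tensorflow as tf
from tensorflow.keras import layers

T, d_r, d_a = 12 + 4 + 12, 200, 50       # sequence length and the two hyper parameters

class WordAttention(layers.Layer):
    # Computes per-word weights (Eq. 14) and concatenates the weighted vectors.
    def build(self, input_shape):
        d = int(input_shape[-1])
        self.Ww = self.add_weight(name="Ww", shape=(d, d_a))
        self.bw = self.add_weight(name="bw", shape=(d_a,))
        self.uw = self.add_weight(name="uw", shape=(d_a, 1))

    def call(self, h):                                       # h: (batch, T, 2 * d_r)
        u = tf.tanh(tf.tensordot(h, self.Ww, axes=1) + self.bw)
        alpha = tf.nn.softmax(tf.tensordot(u, self.uw, axes=1), axis=1)
        return tf.reshape(alpha * h, (-1, T * 2 * d_r))      # concatenation, not a weighted sum

inputs = layers.Input(shape=(T,), dtype="int32")
x = layers.Embedding(input_dim=100000, output_dim=300)(inputs)   # initialised from word2vec in practice
h = layers.Bidirectional(layers.GRU(d_r, return_sequences=True))(x)
a = WordAttention()(h)
fc = layers.Dense(d_a, activation="relu")(a)                 # FC layer (width assumed)
out = layers.Dense(1, activation="sigmoid")(fc)              # logistic regression layer
model = tf.keras.Model(inputs, out)
model.compile(optimizer="adam", loss="binary_crossentropy")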
Sampling and Training
We first extract both particular samples and general samples from the KB using SPARQL queries and reasoning; we then improve sample quality by detecting and repairing wrong and missing entity classifications with the help of external KBs; and finally we train the classifiers.
Particular samples are based on the entities $E_M$ that are lexically matched by the literals. For each literal candidate class $c$ in $C_M$ , its particular samples are generated by:
Extracting its positive particular entities: $E_M^c = \left\lbrace e | e \in E_M, e \text{ is an instance of } c \right\rbrace $ ;
Generating its positive particular samples as
$$\mathcal {P}_c^{+} = \cup _{e \in E_M^c} \left\lbrace \langle s,p,l \rangle | s \in S(p,e), l \in L(e) \right\rbrace ,$$ (Eq. 20)
where $S(p,e)$ denotes the set of entities occurring in the subject position in a triple of the form $\langle s, p, e\rangle $ , and $L(e)$ denotes all the labels (text phrases) of the entity $e$ ;
Extracting its negative particular entities $E_M^{\widetilde{c}}$ as those entities in $E_M$ that are instances of some sibling class of $c$ and not instances of $c$ ;
Generating its negative particular samples $\mathcal {P}_c^-$ with $E_M^{\widetilde{c}}$ using the same approach as for positive samples.
Given that the literal matched candidate classes $C_M$ are only a part of all the candidate classes $C_{PM}$ , and that the size of particular samples may be too small to train the neural network, we additionally generate general samples based on common KB entities. For each candidate class $c$ in $C_{PM}$ , all its entities in the KB, denoted as $E^c$ , are extracted and then its positive general samples, denoted as $\mathcal {G}_c^+$ , are generated from $E^c$ using the same approach as for particular samples. Similarly, entities of the sibling classes of $c$ , denoted as $E^{\widetilde{c}}$ , are extracted, and general negative samples, denoted as $\mathcal {G}_c^-$ , are generated from $E^{\widetilde{c}}$ . As with the negative particular entities, we check each entity in $E^{\widetilde{c}}$ and remove those that are instances of $c$ .
Unlike the particular samples, the positive and negative general samples are balanced. This means that we reduce the size of $\mathcal {G}_c^+$ and $\mathcal {G}_c^-$ to the minimum of $\#(\mathcal {G}_c^+)$ , $\#(\mathcal {G}_c^-)$ and $N_0$ , where $\#()$ denotes set cardinality, and $N_0$ is a hyper parameter for sampling. Size reduction is implemented via random sampling.
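A compact sketch of the sample generation and balancing is given below; subjects_of and labels_of are assumed helpers (backed by SPARQL queries) returning the (subject, property) pairs $S(p,e)$ and the labels $L(e)$ of an entity.

import random

def make_samples(entities, subjects_of, labels_of):
    # Turn entities into <subject, property, label> training triples, cf. Eq. (20).
    return [(s, p, l)
            for e in entities
            for (s, p) in subjects_of(e)
            for l in labels_of(e)]

def balance(pos, neg, n0=1200):
    # Reduce the general positive/negative sets to the same size (at most N_0).
    k = min(len(pos), len(neg), n0)
    return random.sample(pos, k), random.sample(neg, k)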
Many KBs are quite noisy, with wrong or missing entity classifications. For example, when using the SPARQL endpoint of DBpedia, dbr:Scotland is classified as dbo:MusicalArtist instead of as dbo:Country, while dbr:Afghan appears without a type. We have corrected and complemented the sample generation by combining the outputs of more than one KB. For example, the DBpedia endpoint suggestions are compared against Wikidata and the DBpedia lookup service. Most DBpedia entities are mapped to Wikidata entities whose types are used to validate and complement the suggested types from the DBpedia endpoint. In addition, the lookup service, although incomplete, typically provides very precise types that can also confirm the validity of the DBpedia endpoint types. The validation is performed by identifying if the types suggested by one KB are compatible with those returned by other KBs, that is, if the relevant types belong to the same branch of the hierarchy (e.g., the DBpedia taxonomy). With the new entity classifications, the samples are revised accordingly.
We train a binary classifier $f^c$ for each class $c$ in $C_{PM}$ . It is first pre-trained with general samples $\mathcal {G}_{c}^+ \cup \mathcal {G}_{c}^-$ , and then fine tuned with particular samples $\mathcal {P}_{c}^+ \cup \mathcal {P}_{c}^-$ . Pre-training deals with the shortage of particular samples, while fine-tuning bridges the gap between common KB entities and the entities associated with the literals, which is also known as domain adaptation. Given that pre-training is the most time consuming step, but is task agnostic, classifiers for all the classes in a KB could be pre-trained in advance to accelerate a specific literal canonicalization task.
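The two-stage training can then be expressed as in the short sketch below, where build_model() returns a fresh AttBiRNN as sketched earlier and the sample sets have already been encoded into feature/label arrays; the epoch counts and batch sizes are illustrative.

def train_classifier(build_model, general_xy, particular_xy):
    model = build_model()
    model.fit(*general_xy, epochs=10, batch_size=128)    # pre-training with general samples
    model.fit(*particular_xy, epochs=5, batch_size=32)   # fine tuning with particular samples
    return model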
Independent and Hierarchical Typing
In prediction, the binary classifier for class $c$ , denoted as $f^c$ , outputs a score $y_l^c$ indicating the probability that a literal $l$ belongs to class $c$ : $y_l^c = f^c(l)$ , $y_l^c \in \left[0,1\right]$ . With the predicted scores, we adopt two strategies – independent and hierarchical – to determine the types. In the independent strategy, the relationship between classes is not considered. A class $c$ is selected as a type of $l$ if its score $y_l^c \ge \theta $ , where $\theta $ is a threshold hyper parameter in $\left[0,1\right]$ .
The hierarchical strategy considers the class hierarchy and the disjointness between sibling classes. We first calculate a hierarchical score for each class with the predicted scores of itself and its descendents:
$$s_l^c = max\left\lbrace y_l^{c^{\prime }} | c^{\prime } \sqsubseteq c,\text{ } c^{\prime } \in C_{PM} \right\rbrace ,$$ (Eq. 28)
where $\sqsubseteq $ denotes the subclass relationship between two classes, $C_{PM}$ is the set of candidate classes for $l$ , and $max$ denotes the maximum value of a set. For a candidate class $c^{\prime }$ in $C_{PM}$ , we denote all disjoint candidate classes as $\mathcal {D}(C_{PM}, c^{\prime })$ . They can be defined as sibling classes of both $c^{\prime }$ and its ancestors, or via logical constraints in the KB. A class $c$ is selected as a type of $l$ if (i) its hierarchical score $s_l^c \ge \theta $ , and (ii) it satisfies the following soft exclusion condition:
$$s_l^c - max\left\lbrace s_l^{c^{\prime }} | c^{\prime } \in \mathcal {D}(C_{PM}, c) \right\rbrace \ge \kappa ,$$ (Eq. 29)
where $\kappa $ is a relaxation hyper parameter. The exclusion of disjoint classes is hard if $\kappa $ is set to 0, and relaxed if $\kappa $ is set to a negative float with a small absolute value e.g., $-0.1$ .
Finally, for a given literal $l$ , we return the set of all selected classes as its types $\mathcal {C}_l$ .
Canonicalization
Given a literal $l$ , we use $\mathcal {C}_l$ to try to identify an associated entity. A set of candidate entities are first retrieved using the lexical index that is built on the entity's name, label, anchor text, etc. Unlike candidate class extraction, here we use the whole text phrase of the literal, and rank the candidate entities according to their lexical similarities. Those entities that are not instances of any classes in $\mathcal {C}_l$ are then filtered out, and the most similar entity among the remainder is selected as the associated entity for $l$ . If no entities are retrieved, or all the retrieved entities are filtered out, then the literal could be associated with a new entity whose types are those most specific classes in $\mathcal {C}_l$ . In either case we can improve the quality of our results by checking that the resulting entities would be consistent if added to the KB, and discarding any entity associations that would lead to inconsistency.
Experiment Setting
In the experiments, we adopt a real literal set (R-Lite) and a synthetic literal set (S-Lite), both of which are extracted from DBpedia. R-Lite is based on the property and literal pairs published by Gunaratna et al. in 2016 BIBREF4 . We refine the data by (i) removing literals that no longer exist in the current version of DBpedia; (ii) extracting new literals from DBpedia for properties whose existing literals were all removed in step (i); (iii) extending each property and literal pair with an associated subject; and (iv) manually adding ground truth types selected from classes defined in the DBpedia Ontology (DBO). To fully evaluate the study with more data, we additionally constructed S-Lite from DBpedia by repeatedly: (i) selecting a DBpedia triple of the form $\langle s,p,e \rangle $ , where $e$ is an entity; (ii) replacing $e$ with its label $l$ to give a triple $\langle s,p,l \rangle $ ; (iii) eliminating the entity $e$ from DBpedia; and (iv) adding as ground truth types the DBpedia classes of which $e$ is (implicitly) an instance. More data details are shown in Table 1 .
In evaluating the typing performance, Precision, Recall and F1 Score are used. For a literal $l$ , the computed types $\mathcal {C}_l$ are compared with the ground truths $\mathcal {C}_l^{gt}$ , and the following micro metrics are calculated: $P_l = \frac{\# (\mathcal {C}_l \cap \mathcal {C}_l^{gt})}{\# (\mathcal {C}_l)}$ , $R_l = \frac{\# (\mathcal {C}_l \cap \mathcal {C}_l^{gt})}{\# (\mathcal {C}_l^{gt})}$ , and ${F_1}_l = \frac{2 \times P_l \times R_l}{P_l + R_l}$ . They are then averaged over all the literals as the final Precision, Recall and F1 Score of a literal set. Although F1 Score measures the overall performance with both Precision and Recall considered, it depends on the threshold hyper parameter $\theta $ as with Precision and Recall. Thus we let $\theta $ range from 0 to 1 with a step of $0.01$ , and calculate the average of all the F1 Scores (AvgF1@all) and top 5 highest F1 Scores (AvgF1@top5). AvgF1@all measures the overall pattern recognition capability, while AvgF1@top5 is relevant in real applications where we often use a validation data set to find a $\theta $ setting that is close to the optimum. We also use the highest (top) Precision in evaluating the sample refinement.
In evaluating entity matching performance, Precision is measured by manually checking whether the identified entity is correct or not. S-Lite is not used for entity matching evaluation as the corresponding entities for all its literals are assumed to be excluded from the KB. We are not able to measure recall for entity matching as we do not have the ground truths; instead, we have evaluated entity matching with different confidence thresholds and compared the number of correct results.
The evaluation includes three aspects. We first compare different settings of the typing framework, analyzing the impacts of sample refinement, fine tuning by particular samples, BiRNN and the attention mechanism. We also compare the independent and hierarchical typing strategies. We then compare the overall typing performance of our framework with (i) Gunaratna et al. BIBREF4 , which matches the literal to both classes and entities; (ii) an entity lookup based method; and (iii) a probabilistic property range estimation method. Finally, we analyze the performance of entity matching with and without the predicted types.
The DBpedia lookup service, which is based on the Spotlight index BIBREF18 , is used for entity lookup (retrieval). The DBpedia SPARQL endpoint is used for query answering and reasoning. The reported results are based on the following settings: the Adam optimizer together with cross-entropy loss are used for network training; $d_r$ and $d_a$ are set to 200 and 50 respectively; $N_0$ is set to 1200; word2vec trained with the latest Wikipedia article dump is adopted for word embedding; and ( $T_s$ , $T_p$ , $T_l$ ) are set to (12, 4, 12) for S-Lite and (12, 4, 15) for R-Lite. The experiments are run on a workstation with Intel(R) Xeon(R) CPU E5-2670 @2.60GHz, with programs implemented by Tensorflow.
Results on Framework Settings
We first evaluate the impact of the neural network architecture, fine tuning and different typing strategies, with their typing results on S-Lite shown in Table 2 and Fig. 3 . Our findings are supported by comparable results on R-Lite. We further evaluate sample refinement, with some statistics of the refinement operations as well as performance improvements shown in Fig. 4 .
According to Table 2 , we find BiRNN significantly outperforms Multiple Layer Perceptron (MLP), a basic but widely used neural network model, while stacking an attention layer (AttBiRNN) further improves AvgF1@all and AvgF1@top5, for example by $3.7\%$ and $3.1\%$ respectively with hierarchical typing ( $\kappa $ = $-0.1$ ). The result is consistent for both pre-trained models and fine tuned models, using both independent and hierarchical typing strategies. This indicates the effectiveness of our neural network architecture. Meanwhile, the performance of all the models is significantly improved after they are fine tuned by the particular samples, as expected. For example, when the independent typing strategy is used, AvgF1@all and AvgF1@top5 of AttBiRNN are improved by $54.1\%$ and $35.2\%$ respectively.
The impact of independent and hierarchical typing strategies is more complex. As shown in Table 2 , when the classifier is weak (e.g., pre-trained BiRNN), hierarchical typing with both hard exclusion ( $\kappa $ = 0) and relaxed exclusion ( $\kappa $ = $-0.1$ ) has higher AvgF1@all and AvgF1@top5 than independent typing. However, when a strong classifier (e.g., fine tuned AttBiRNN) is used, AvgF1@all and AvgF1@top5 of hierarchical typing with relaxed exclusion are close to independent typing, while hierarchical typing with hard exclusion has worse performance. We further analyze Precision, Recall and F1 Score of both typing strategies under varying threshold ( $\theta $ ) values, as shown in Fig. 3 . In comparison with independent typing, hierarchical typing achieves (i) more stable Precision, Recall and F1 Score curves; and (ii) significantly higher Precision, especially when $\theta $ is small. Meanwhile, as with the results in Table 2 , relaxed exclusion outperforms hard exclusion in hierarchical typing except for Precision when $\theta $ is between 0 and $0.05$ .
Fig. 4 [Right] shows the ratio of positive and negative particular samples that are deleted and added during sample refinement. The AttBiRNN classifiers fine tuned by the refined particular samples are compared with those fine tuned by the original particular samples. The improvements on AvgF1@all, AvgF1@top5 and top Precision, which are based on the average of the three above typing settings, are shown in Fig. 4 [Left]. On the one hand, we find sample refinement benefits both S-Lite and R-Lite, as expected. On the other hand, we find the improvement on S-Lite is limited, while the improvement on R-Lite is quite significant: F1@all and top Precision, e.g., are improved by around $0.8\%$ and $1.8\%$ respectively on S-Lite, but $4.3\%$ and $7.4\%$ respectively on R-Lite. This may be due to two factors: (i) the ground truths of S-Lite are the entities' class and super classes inferred from the KB itself, while the ground truths of R-Lite are manually labeled; (ii) sample refinement deletes many more noisy positive and negative samples (which are caused by wrong entity classifications of the KB) on R-Lite than on S-Lite, as shown in Fig. 4 [Right].
Results on Semantic Typing
Table 3 displays the overall semantic typing performance of our method and the baselines. Results for two optimum settings are reported for each method. The baseline Entity-Lookup retrieves one or several entities using the whole phrase of the literal, and uses their classes and super classes as the types. Gunaratna BIBREF4 matches the literal's focus term (head word) to an exact class, then an exact entity, and then a class with the highest similarity score. It stops as soon as some classes or entities are matched. We extend its original “exact entity match" setting with “relaxed entity match" which means multiple entities are retrieved. Property Range Estimation gets the classes and super classes from the entity objects of the property, and calculates the score of each class as the ratio of entity objects that belong to that class. (H/I, $\kappa $ , $\cdot $ )@top-P (F1) denotes the setting where the highest Precision (F1 Score) is achieved.
As we can see, AttBiRNN achieves much higher performance than all three baselines on both S-Lite and R-Lite. For example, the F1 Score of AttBiRNN is $67.6\%$ , $160.2\%$ and $13.8\%$ higher than those of Gunaratna, Entity-Lookup and Property Range Estimation respectively on S-Lite, and $28.5\%$ , $58.3\%$ and $37.9\%$ higher respectively on R-Lite. AttBiRNN also has significantly higher Precision and Recall, even when the setting is adjusted for the highest F1 Score. This is as expected, because our neural network, which learns the semantics (statistical correlation) from both word vector corpus and KB, models and utilizes the contextual meaning of the literal and its associated triple, while Gunaratna and Entity-Lookup are mostly based on lexical similarity. The performance of Property Range Estimation is limited because the object annotation in DBpedia usually does not follow the property range, especially for those properties in R-Lite. For example, objects of the property dbp:office have 35 DBO classes, ranging from dbo:City and dbo:Country to dbo:Company.
It is also notable that AttBiRNN and Property Range Estimation perform better on S-Lite than on R-Lite. The top F1 Score is $20.7\%$ and $46.2\%$ higher respectively, while the top Precision is $11.4\%$ and $43.6\%$ higher respectively. This is because R-Lite is more noisy, with longer literals, and has more ground truth types on average (cf. Table 1 ), while S-Lite has fewer properties, and each property has a large number of entity objects, which significantly benefits Property Range Estimation. In contrast, the two entity matching based methods, Gunaratna and Entity-Lookup, perform worse on S-Lite than on R-Lite; this is because the construction of S-Lite removes those KB entities from which literals were derived. Gunaratna outperforms Entity-Lookup as it extracts the head word and matches it to both entities and classes. Note that the head word is also included in our candidate class extraction with lookup.
Results on Entity Matching
Table 4 displays the number of correct matched entities and the Precision of entity matching on R-Lite. The types are predicted by the fine-tuned AttBiRNN with independent typing and two threshold settings. We can see that Precision is improved when the retrieved entities that do not belong to any of the predicted types are filtered out. The improvement is $6.1\%$ and $5.8\%$ when $\theta $ is set to $0.15$ and $0.01$ respectively. Meanwhile, although the total number of matches may decrease because of the filtering, the number of correct matches still increases from 396 to 404 ( $\theta =0.01$ ). This means that Recall is also improved.
Related Work
Work on KB quality issues can be divided into KB quality assessment BIBREF2 , BIBREF1 , and KB quality improvement/refinement BIBREF3 . The former includes error and anomaly detection methods, such as test-driven and query template based approaches BIBREF19 , BIBREF20 , with statistical methods BIBREF21 and consistency reasoning BIBREF22 also being applied to assess KB quality with different kinds of metrics. The latter includes (i) KB completion, such as entity classification BIBREF7 , BIBREF8 , BIBREF9 , relation prediction BIBREF23 and data typing BIBREF15 ; and (ii) KB diagnosis and repair, such as abnormal value detection BIBREF20 , erroneous identity link detection BIBREF24 and data mapping (e.g., links to Wikipedia pages) correction BIBREF25 .
KB canonicalization refers to those refinement works that deal with redundant and ambiguous KB components as well as poorly expressed knowledge with limited reasoning potential. Some works in open information extraction (IE) BIBREF26 , BIBREF27 , BIBREF28 aim to identify synonymous noun phrases and relation phrases of open KBs which are composed of triple assertions extracted from text without any ontologies. For example, the recently proposed CESI method BIBREF27 utilizes both learned KB embeddings and side information like WordNet to find synonyms via clustering. Other works analyze synonyms for ontological KBs. Abedjan et al. BIBREF29 discovered synonymously used predicates for query expansion on DBpedia. Pujara et al. BIBREF30 identified coreferent entities of NELL with ontological constraints considered. These clustering, embedding, or entity linking based methods in open IE however can not be directly applied or do not work well for our KB literal canonicalization. The utilization of these techniques will be in our future work.
String literals in ontological KBs such as DBpedia often represent poorly expressed knowledge, with semantic types and coreferent entities missing. As far as we know, canonicalization of such literals has been little studied. Gunaratna et al. BIBREF4 typed the literal by matching its head term to ontology classes and KB entities, but the literal context (e.g., the associated subject and property) and the semantic meaning of its constituent words were not utilized. Some ideas from entity classification can be borrowed for literal typing but become ineffective as the context differs. For example, the baseline Property Range Estimation in our experiments uses the idea of SDType BIBREF8 — utilizing the statistical distribution of types in the subject position and object position of properties to estimate an entity's type probabilities. As a literal is associated with only one property, such probabilistic estimation becomes inaccurate (cf. results in Table 3 ).
Our literal classification model is in some degree inspired by those natural language understanding and web table annotation works that match external noun phrases to KB types and entities BIBREF14 , BIBREF10 , BIBREF12 using neural networks and semantic embeddings for modeling the contextual semantics. For example, Luo et al. BIBREF10 learned features from the surrounding cells of a target cell to predict its entity association. However the context in those works is very different, i.e., a simple regular structure of rows/columns with limited (table) metadata. In contrast, KBs have a complex irregular structure and rich meta data (the knowledge captured in the KB). Differently from these works, we developed different methods, e.g., candidate class extraction and high quality sampling, to learn the network from the KB with its assertions, terminologies and reasoning capability.
Discussion and Outlook
In this paper we present our study on KB literal canonicalization — an important aspect of KB quality that has been little studied. A new technical framework is proposed that combines neural networks with knowledge-based learning. It (i) extracts candidate classes as well as their positive and negative samples from the KB by lookup and query answering, with their quality improved using an external KB; (ii) trains classifiers that can effectively learn a literal's contextual features with BiRNNs and an attention mechanism; and (iii) identifies types and matches entities for canonicalization. We use a real data set and a synthetic data set, both extracted from DBpedia, for evaluation. Our method achieves much higher performance than the baselines, which include the state-of-the-art. We discuss below some more subjective observations and possible directions for future work.
Acknowledgments
The work is supported by the AIDA project (U.K. Government's Defence & Security Programme in support of the Alan Turing Institute), the SIRIUS Centre for Scalable Data Access (Research Council of Norway, project 237889), the Royal Society, EPSRC projects DBOnto, $\text{MaSI}^{\text{3}}$ and $\text{ED}^{\text{3}}$ . | DBpedia |
a996b6aee9be88a3db3f4127f9f77a18ed10caba | a996b6aee9be88a3db3f4127f9f77a18ed10caba_0 | Q: What's the precision of the system?
Text: Introduction
Ontology-based knowledge bases (KBs) like DBpedia BIBREF0 are playing an increasingly important role in domains such as knowledge management, data analysis and natural language understanding. Although they are very valuable resources, the usefulness and usability of such KBs is limited by various quality issues BIBREF1 , BIBREF2 , BIBREF3 . One such issue is the use of string literals (both explicitly typed and plain literals) instead of semantically typed entities; for example in the triple $\langle $ River_Thames, passesArea, “Port Meadow, Oxford" $\rangle $ . This weakens the KB as it does not capture the semantics of such literals. If, in contrast, the object of the triple were an entity, then this entity could, e.g., be typed as Wetland and Park, and its location given as Oxford. This problem is pervasive and hence results in a significant loss of information: according to statistics from Gunaratna et al. BIBREF4 in 2016, the DBpedia property dbp:location has over 105,000 unique string literals that could be matched with entities. Besides DBpedia, such literals can also be found in some other KBs from encyclopedias (e.g., zhishi.me BIBREF5 ), in RDF graphs transformed from tabular data (e.g., LinkedGeoData BIBREF6 ), in aligned or evolving KBs, etc.
One possible remedy for this problem is to apply automated semantic typing and entity matching (AKA canonicalization) to such literals. To the best of our knowledge, semantic typing of KB literals has rarely been studied. Gunaratna et al. BIBREF4 used semantic typing in their entity summarization method, first identifying the so called focus term of a phrase via grammatical structure analysis, and then matching the focus term with both KB types and entities. Their method is, however, rather simplistic: it neither utilizes the literal's context, such as the associated property and subject, nor captures the contextual meaning of the relevant words. What has been widely studied is the semantic annotation of KB entities BIBREF7 , BIBREF8 , BIBREF9 and of noun phrases outside the KB (e.g., from web tables) BIBREF10 , BIBREF11 , BIBREF12 ; in such cases, however, the context is very different, and entity typing can, for example, exploit structured information such as the entity's linked Wikipedia page BIBREF7 and the domain and range of properties that the entity is associated with BIBREF8 .
With the development of deep learning, semantic embedding and feature learning have been widely adopted for exploring different kinds of contextual semantics in prediction, with Recurrent Neural Network (RNN) being a state-of-the-art method for dealing with structured data and text. One well known example is word2vec — an RNN language model which can represent words in a vector space that retains their meaning BIBREF13 . Another example is a recent study by Kartsaklis et al. BIBREF14 , which maps text to KB entities with a Long-short Term Memory RNN for textual feature learning. These methods offer the potential for developing accurate prediction-based methods for KB literal typing and entity matching where the contextual semantics is fully exploited.
In this study, we investigate KB literal canonicalization using a combination of RNN-based learning and semantic technologies. We first predict the semantic types of a literal by: (i) identifying candidate classes via lexical entity matching and KB queries; (ii) automatically generating positive and negative examples via KB sampling, with external semantics (e.g., from other KBs) injected for improved quality; (iii) training classifiers using relevant subject-predicate-literal triples embedded in an attentive bidirectional RNN (AttBiRNN); and (iv) using the trained classifiers and KB class hierarchy to predict candidate types. The novelty of our framework lies in its knowledge-based learning; this includes automatic candidate class extraction and sampling from the KB, triple embedding with different importance degrees suggesting different semantics, and using the predicted types to identify a potential canonical entity from the KB. We have evaluated our framework using a synthetic literal set (S-Lite) and a real literal set (R-Lite) from DBpedia BIBREF0 . The results are very promising, with significant improvements over several baselines, including the existing state-of-the-art.
Problem Statement
In this study we consider a knowledge base (KB) that includes both ontological axioms that induce (at least) a hierarchy of semantic types (i.e., classes), and assertions that describe concrete entities (individuals). Each such assertion is assumed to be in the form of an RDF triple $\langle s,p,o \rangle $ , where $s$ is an entity, $p$ is a property and $o$ can be either an entity or a literal (i.e., a typed or untyped data value such as a string or integer).
We focus on triples of the form $\langle s,p,l \rangle $ , where $l$ is a string literal; such literals can be identified by regular expressions, as in BIBREF4 , or by data type inference as in BIBREF15 . Our aim is to canonicalize $l$ by first identifying the type of $l$ , i.e., a set of classes $\mathcal {C}_l$ that an entity corresponding to $l$ should be an instance of, and then determining if such an entity already exists in the KB. The first subtask is modeled as a machine learning classification problem where a real-valued score in $\left[0,1\right]$ is assigned to each class $c$ occurring in the KB, and $\mathcal {C}_l$ is the set of classes determined by the assigned scores using strategies such as adopting a class if its score exceeds some threshold. The second subtask is modeled as an entity lookup problem constrained by $\mathcal {C}_l$ .
It is important to note that:
When we talk about a literal $l$ we mean the occurrence of $l$ in a triple $\langle s,p,l \rangle $ . Lexically equivalent literals might be treated very differently depending on their triple contexts.
If the KB is an OWL DL ontology, then the set of object properties (which connect two entities) and data properties (which connect an entity to a literal) should be disjoint. In practice, however, KBs such as DBpedia often don't respect this constraint. In any case, we avoid the issue by simply computing the relevant typing and canonicalization information, and leaving it up to applications as to how they want to exploit it.
We assume that no manual annotations or external labels are given — the classifier is automatically trained using the KB.
Technical Framework
The technical framework for the classification problem is shown in Fig. 1 . It involves three main steps: (i) candidate class extraction; (ii) model training and prediction; and (iii) literal typing and canonicalization.
Popular KBs like DBpedia often contain a large number of classes. For efficiency reasons, and to reduce noise in the learning process, we first identify a subset of candidate classes. This selection should be rather inclusive so as to maximize potential recall. In order to achieve this we pool the candidate classes for all literals occurring in triples with a given property; i.e., to compute the candidate classes for a literal $l$ occurring in a triple $\langle s,p,l \rangle $ , we consider all triples that use property $p$ . Note that, as discussed above, in practice such triples may include both literals and entities as their objects. We thus use two techniques for identifying candidate classes from the given set of triples. In the case where the object of the triple is an entity, the candidates are just the set of classes that this entity is an instance of. In practice we identify the candidates for the set of all such entities, which we denote $E_P$ , via a SPARQL query to the KB, with the resulting set of classes being denoted $C_P$ . In the case where the object of the triple is a literal, we first match the literal to entities using a lexical index which is built based on the entity's name, labels and anchor text (description). To maximize recall, the literal, its tokens (words) and its sub-phrases are used to retrieve entities by lexical matching; this technique is particularly effective when the literal is a long phrase. As in the first case, we identify all relevant entities, which we denote $E_M$ , and then retrieve the relevant classes $C_M$ using a SPARQL query. The candidate class set is simply the union of $C_P$ and $C_M$ , denoted as $C_{PM}$ .
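For illustration, the class retrieval for the entity objects $E_P$ of a given property can be expressed as a single SPARQL query. The sketch below is not the paper's code: the SPARQLWrapper library, the public DBpedia endpoint and the example property URI are assumptions made purely for demonstration.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://dbpedia.org/sparql"  # assumption: public DBpedia endpoint

def candidate_classes_for_property(prop_uri):
    """Collect C_P: the classes of all entities E_P appearing as objects of prop_uri."""
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setReturnFormat(JSON)
    sparql.setQuery(
        "SELECT DISTINCT ?cls WHERE { "
        "  ?s <%s> ?o . ?o rdf:type ?cls . FILTER(isIRI(?o)) "
        "}" % prop_uri
    )
    bindings = sparql.query().convert()["results"]["bindings"]
    return {b["cls"]["value"] for b in bindings}

# Example call (the property URI is hypothetical, chosen only for illustration):
C_P = candidate_classes_for_property("http://dbpedia.org/property/office")
```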
We adopt the strategy of training one binary classifier for each candidate class, instead of multi-class classification, so as to facilitate dealing with the class hierarchy BIBREF16 . The classifier architecture includes an input layer with word embedding, an encoding layer with bidirectional RNNs, an attention layer and a fully connected (FC) layer for modeling the contextual semantics of the literal. To train a classifier, both positive and negative entities (samples), including those from $E_M$ (particular samples) and those outside $E_M$ (general samples) are extracted from the KB, with external KBs and logical constraints being used to improve sample quality. The trained classifiers are used to compute a score for each candidate class.
The final stage is to semantically type and, where possible, canonicalise literals. For a given literal, two strategies, independent and hierarchical, are used to determine its types (classes), with a score for each type. We then use these types and scores to try to identify an entity in the KB that could reasonably be substituted for the literal.
Prediction Model
Given a phrase literal $l$ and its associated RDF triple $\langle s, p, l \rangle $ , our neural network model aims at utilizing the semantics of $s$ , $p$ and $l$ for the classification of $l$ . The architecture is shown in Fig. 2 . It first separately parses the subject label, the property label and the literal into three word (token) sequences whose lengths, denoted as $T_s$ , $T_p$ and $T_l$ , are fixed to the maximum subject, property and literal sequence lengths from the training data by padding shorter sequences with null words. We then concatenate the three sequences into a single word sequence ( $word_t, t \in \left[1,T\right]$ ), where $T = T_s + T_p + T_l$ . Each word is then encoded into a vector via word embedding (null is encoded into a zero vector), and the word sequence is transformed into a vector sequence ( $x_t, t \in \left[1,T\right]$ ). Note that this preserves information about the position of words in $s$ , $p$ and $l$ .
The semantics of forward and backward surrounding words is effective in predicting a word's semantics. For example, “Port” and “Meadow” are more likely to indicate a place as they appear after “Area” and before “Oxford”. To embed such contextual semantics into a feature vector, we stack a layer composed of bidirectional Recurrent Neural Networks (BiRNNs) with Gated Recurrent Unit (GRU) BIBREF17 . Within each RNN, a reset gate $r_t$ is used to control the contribution of the past word, and an update gate $z_t$ is used to balance the contributions of the past words and the new words. The hidden state (embedding) at position $t$ is computed as
$${\left\lbrace \begin{array}{ll} h_t = (1-z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t, \\ \tilde{h}_t = \tau (W_h x_t + r_t \odot (U_h h_{t-1}) + b_h), \\ z_t = \sigma (W_z x_t + U_z h_{t-1} + b_z), \\ r_t = \sigma (W_r x_t + U_r h_{t-1} + b_r), \end{array}\right.}$$ (Eq. 13)
where $\odot $ denotes the Hadamard product, $\sigma $ and $\tau $ denote the sigmoid and tanh activation functions respectively, and $W_h$ , $U_h$ , $b_h$ , $W_z$ , $U_z$ , $b_z$ , $W_r$ , $U_r$ and $b_r$ are parameters to learn. With the two bidirectional RNNs, one forward hidden state and one backward hidden state are calculated for the sequence, denoted as ( $\overrightarrow{h_t}, t \in \left[1,T\right]$ ) and ( $\overleftarrow{h_t}, t \in \left[1,T\right]$ ) respectively. They are concatenated as the output of the RNN layer: $h_t = \left[\overrightarrow{h_t}, \overleftarrow{h_t}\right], t \in \left[1,T\right]$ .
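A minimal NumPy transcription of Eq. (13) is given below for concreteness; it shows a single unbatched GRU step with the parameter matrices taken as given, and is not the TensorFlow implementation used in the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, params):
    """One GRU update following Eq. (13): update gate z_t, reset gate r_t,
    candidate state h~_t, and the interpolated hidden state h_t."""
    W_h, U_h, b_h, W_z, U_z, b_z, W_r, U_r, b_r = params
    z_t = sigmoid(W_z @ x_t + U_z @ h_prev + b_z)
    r_t = sigmoid(W_r @ x_t + U_r @ h_prev + b_r)
    h_tilde = np.tanh(W_h @ x_t + r_t * (U_h @ h_prev) + b_h)
    return (1.0 - z_t) * h_prev + z_t * h_tilde
```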
We assume different words are differently informative towards the type of the literal. For example, the word “port” is more important than the other words in distinguishing the type Wetland from other concrete types of Place. To this end, an attention layer is further stacked. Given the input from the RNN layer ( $h_t, t \in \left[1,T \right]$ ), the attention layer outputs $h_a = \left[\alpha _t h_t \right], t \in \left[1,T \right]$ , where $\alpha _t$ is the normalized weight of the word at position $t$ and is calculated as
$${\left\lbrace \begin{array}{ll} \alpha _t = \frac{exp(u^T_t u_w)}{\sum _{t \in \left[1,T\right]} exp (u^T_t u_w)} \\ u_t = \tau (W_w h_t + b_w), \end{array}\right.}$$ (Eq. 14)
where $u_w$ , $W_w$ and $b_w$ are parameters to learn. Specifically, $u_w$ denotes the general informative degrees of all the words, while $\alpha _t$ denotes the attention of the word at position $t$ w.r.t. other words in the sequence. Note that the attention weights can also be utilized to justify a prediction. In order to exploit information about the location of a word in the subject, property or literal, we do not calculate the weighted sum of the BiRNN output but concatenate the weighted vectors. The dimension of each RNN hidden state (i.e., $\overleftarrow{h_t}$ and $\overrightarrow{h_t}$ ), denoted as $d_r$ , and the dimension of each attention layer output (i.e., $\alpha _t h_t$ ), denoted as $d_a$ , are two hyper parameters of the network architecture.
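Similarly, Eq. (14) and the concatenation of the weighted vectors can be sketched as follows; the shapes of $W_w$ , $b_w$ and $u_w$ and the softmax stabilization are illustrative assumptions.

```python
import numpy as np

def attention_output(H, W_w, b_w, u_w):
    """H has shape (T, 2*d_r): the BiRNN outputs h_1..h_T.
    Returns the concatenation [alpha_1*h_1, ..., alpha_T*h_T] as in Eq. (14)."""
    U = np.tanh(H @ W_w.T + b_w)             # u_t = tanh(W_w h_t + b_w), shape (T, d_a')
    scores = U @ u_w                         # u_t^T u_w, shape (T,)
    alpha = np.exp(scores - scores.max())    # numerically stabilized softmax
    alpha = alpha / alpha.sum()
    return (alpha[:, None] * H).reshape(-1)  # concatenation keeps word positions
```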
A fully connected (FC) layer and a logistic regression layer are finally stacked for modeling the nonlinear relationship and calculating the output score respectively:
$$ f(s, p, l) = \sigma (W_f h_a + b_f),$$ (Eq. 15)
where $W_f$ and $b_f$ are the parameters to learn, $\sigma $ denotes the sigmoid function, and $f$ denotes the function of the whole network.
Sampling and Training
We first extract both particular samples and general samples from the KB using SPARQL queries and reasoning; we then improve sample quality by detecting and repairing wrong and missing entity classifications with the help of external KBs; and finally we train the classifiers.
Particular samples are based on the entities $E_M$ that are lexically matched by the literals. For each literal candidate class $c$ in $C_M$ , its particular samples are generated by:
Extracting its positive particular entities: $E_M^c = \left\lbrace e | e \in E_M, e \text{ is an instance of } c \right\rbrace $ ;
Generating its positive particular samples as
$$\mathcal {P}_c^{+} = \cup _{e \in E_M^c} \left\lbrace \langle s,p,l \rangle | s \in S(p,e), l \in L(e) \right\rbrace ,$$ (Eq. 20)
where $S(p,e)$ denotes the set of entities occurring in the subject position in a triple of the form $\langle s, p, e\rangle $ , and $L(e)$ denotes all the labels (text phrases) of the entity $e$ ;
Extracting its negative particular entities $E_M^{\widetilde{c}}$ as those entities in $E_M$ that are instances of some sibling class of $c$ and not instances of $c$ ;
Generating its negative particular samples $\mathcal {P}_c^-$ with $E_M^{\widetilde{c}}$ using the same approach as for positive samples.
Given that the literal matched candidate classes $C_M$ are only a part of all the candidate classes $C_{PM}$ , and that the size of particular samples may be too small to train the neural network, we additionally generate general samples based on common KB entities. For each candidate class $c$ in $C_{PM}$ , all its entities in the KB, denoted as $E^c$ , are extracted and then its positive general samples, denoted as $\mathcal {G}_c^+$ , are generated from $E^c$ using the same approach as for particular samples. Similarly, entities of the sibling classes of $c$ , denoted as $E^{\widetilde{c}}$ , are extracted, and general negative samples, denoted as $\mathcal {G}_c^-$ , are generated from $E^{\widetilde{c}}$ . As for negative particular entities, we check each entity in $E^{\widetilde{c}}$ and remove those that are instances of $c$ .
Unlike the particular samples, the positive and negative general samples are balanced. This means that we reduce the size of $\mathcal {G}_c^+$ and $\mathcal {G}_c^-$ to the minimum of $\#(\mathcal {G}_c^+)$ , $\#(\mathcal {G}_c^-)$ and $N_0$ , where $\#()$ denotes set cardinality, and $N_0$ is a hyper parameter for sampling. Size reduction is implemented via random sampling.
Many KBs are quite noisy, with wrong or missing entity classifications. For example, when using the SPARQL endpoint of DBpedia, dbr:Scotland is classified as dbo:MusicalArtist instead of as dbo:Country, while dbr:Afghan appears without a type. We have corrected and complemented the sample generation by combining the outputs of more than one KB. For example, the DBpedia endpoint suggestions are compared against Wikidata and the DBpedia lookup service. Most DBpedia entities are mapped to Wikidata entities whose types are used to validate and complement the suggested types from the DBpedia endpoint. In addition, the lookup service, although incomplete, typically provides very precise types that can also confirm the validity of the DBpedia endpoint types. The validation is performed by identifying if the types suggested by one KB are compatible with those returned by other KBs, that is, if the relevant types belong to the same branch of the hierarchy (e.g., the DBpedia taxonomy). With the new entity classifications, the samples are revised accordingly.
We train a binary classifier $f^c$ for each class $c$ in $C_{PM}$ . It is first pre-trained with general samples $\mathcal {G}_{c}^+ \cup \mathcal {G}_{c}^-$ , and then fine tuned with particular samples $\mathcal {P}_{c}^+ \cup \mathcal {P}_{c}^-$ . Pre-training deals with the shortage of particular samples, while fine-tuning bridges the gap between common KB entities and the entities associated with the literals, which is also known as domain adaptation. Given that pre-training is the most time consuming step, but is task agnostic, classifiers for all the classes in a KB could be pre-trained in advance to accelerate a specific literal canonicalization task.
Independent and Hierarchical Typing
In prediction, the binary classifier for class $c$ , denoted as $f^c$ , outputs a score $y_l^c$ indicating the probability that a literal $l$ belongs to class $c$ : $y_l^c = f^c(l)$ , $y_l^c \in \left[0,1\right]$ . With the predicted scores, we adopt two strategies – independent and hierarchical – to determine the types. In the independent strategy, the relationship between classes is not considered. A class $c$ is selected as a type of $l$ if its score $y_l^c \ge \theta $ , where $\theta $ is a threshold hyper parameter in $\left(0,1\right)$ .
The hierarchical strategy considers the class hierarchy and the disjointness between sibling classes. We first calculate a hierarchical score for each class with the predicted scores of itself and its descendents:
$$s_l^c = max\left\lbrace y_l^{c^{\prime }} | c^{\prime } \sqsubseteq c,\text{ } c^{\prime } \in C_{PM} \right\rbrace ,$$ (Eq. 28)
where $\sqsubseteq $ denotes the subclass relationship between two classes, $C_{PM}$ is the set of candidate classes for $l$ , and $max$ denotes the maximum value of a set. For a candidate class $c^{\prime }$ in $C_{PM}$ , we denote all disjoint candidate classes as $\mathcal {D}(C_{PM}, c^{\prime })$ . They can be defined as sibling classes of both $c^{\prime }$ and its ancestors, or via logical constraints in the KB. A class $c$ is selected as a type of $l$ if (i) its hierarchical score $s_l^c \ge \theta $ , and (ii) it satisfies the following soft exclusion condition:
$$s_l^c - max\left\lbrace s_l^{c^{\prime }} | c^{\prime } \in \mathcal {D}(C_{PM}, c) \right\rbrace \ge \kappa ,$$ (Eq. 29)
where $\kappa $ is a relaxation hyper parameter. The exclusion of disjoint classes is hard if $\kappa $ is set to 0, and relaxed if $\kappa $ is set to a negative float with a small absolute value e.g., $-0.1$ .
Finally, for a given literal $l$ , we return the set of all selected classes as its types $\mathcal {C}_l$ .
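Once the per-class scores $y_l^c$ are available, the two typing strategies reduce to a few lines. The sketch below assumes that the candidate descendants and disjoint classes of each candidate class have been precomputed; the dictionaries `descendants` and `disjoint` are placeholders, not part of the paper's code.

```python
def independent_typing(scores, theta):
    """scores: dict mapping each candidate class to its score y_l^c."""
    return {c for c, y in scores.items() if y >= theta}

def hierarchical_typing(scores, descendants, disjoint, theta, kappa):
    """descendants[c]: candidate subclasses of c (including c itself);
    disjoint[c]: candidate classes disjoint with c (siblings of c or its ancestors)."""
    # Hierarchical score: best score among the class and its candidate descendants (Eq. 28).
    s = {c: max(scores[d] for d in descendants[c]) for c in scores}
    selected = set()
    for c in scores:
        others = [s[d] for d in disjoint[c] if d in s]
        exclusion_ok = not others or s[c] - max(others) >= kappa   # soft exclusion (Eq. 29)
        if s[c] >= theta and exclusion_ok:
            selected.add(c)
    return selected
```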
Canonicalization
Given a literal $l$ , we use $\mathcal {C}_l$ to try to identify an associated entity. A set of candidate entities are first retrieved using the lexical index that is built on the entity's name, label, anchor text, etc. Unlike candidate class extraction, here we use the whole text phrase of the literal, and rank the candidate entities according to their lexical similarities. Those entities that are not instances of any classes in $\mathcal {C}_l$ are then filtered out, and the most similar entity among the remainder is selected as the associated entity for $l$ . If no entities are retrieved, or all the retrieved entities are filtered out, then the literal could be associated with a new entity whose types are those most specific classes in $\mathcal {C}_l$ . In either case we can improve the quality of our results by checking that the resulting entities would be consistent if added to the KB, and discarding any entity associations that would lead to inconsistency.
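A minimal sketch of this type-constrained entity matching step is shown below; `lookup` (lexical retrieval with similarity scores) and `instance_of` (KB membership check) are hypothetical helper functions standing in for the lexical index and SPARQL queries described above.

```python
def canonicalize(literal, predicted_types, lookup, instance_of):
    """Return the best matching KB entity for `literal`, or None if a new
    entity (typed with the most specific predicted classes) should be created."""
    # Candidates are ranked by lexical similarity to the whole literal phrase.
    candidates = lookup(literal)                      # e.g. [(entity, similarity), ...]
    typed = [
        (e, sim) for e, sim in candidates
        if any(instance_of(e, c) for c in predicted_types)
    ]
    if not typed:
        return None                                   # fall back to creating a new entity
    return max(typed, key=lambda pair: pair[1])[0]
```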
Experiment Setting
In the experiments, we adopt a real literal set (R-Lite) and a synthetic literal set (S-Lite) , both of which are extracted from DBpedia. R-Lite is based on the property and literal pairs published by Gunaratna et al. in 2016 BIBREF4 . We refine the data by (i) removing literals that no longer exist in the current version of DBpedia; (ii) extracting new literals from DBpedia for properties whose existing literals were all removed in step (i); (iii) extending each property and literal pair with an associated subject; and (iv) manually adding ground truth types selected from classes defined in the DBpedia Ontology (DBO). To fully evaluate the study with more data, we additionally constructed S-Lite from DBpedia by repeatedly: (i) selecting a DBpedia triple of the form $\langle s,p,e \rangle $ , where $e$ is an entity; (ii) replacing $e$ with its label $l$ to give a triple $\langle s,p,l \rangle $ ; (iii) eliminating the entity $e$ from DBpedia; and (iv) adding as ground truth types the DBpedia classes of which $e$ is (implicitly) an instance. More data details are shown in Table 1 .
In evaluating the typing performance, Precision, Recall and F1 Score are used. For a literal $l$ , the computed types $\mathcal {C}_l$ are compared with the ground truths $\mathcal {C}_l^{gt}$ , and the following micro metrics are calculated: $P_l = \frac{\# (\mathcal {C}_l \cap \mathcal {C}_l^{gt})}{\# (\mathcal {C}_l)}$ , $R_l = \frac{\# (\mathcal {C}_l \cap \mathcal {C}_l^{gt})}{\# (\mathcal {C}_l^{gt})}$ , and ${F_1}_l = \frac{2 \times P_l \times R_l}{P_l + R_l}$ . They are then averaged over all the literals as the final Precision, Recall and F1 Score of a literal set. Although F1 Score measures the overall performance with both Precision and Recall considered, it depends on the threshold hyper parameter $\theta $ as with Precision and Recall. Thus we let $\theta $ range from 0 to 1 with a step of $0.01$ , and calculate the average of all the F1 Scores (AvgF1@all) and top 5 highest F1 Scores (AvgF1@top5). AvgF1@all measures the overall pattern recognition capability, while AvgF1@top5 is relevant in real applications where we often use a validation data set to find a $\theta $ setting that is close to the optimum. We also use the highest (top) Precision in evaluating the sample refinement.
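The micro metrics and the averaged F1 scores can be computed as follows; `predict_types(l, theta)` is a placeholder for the trained typing pipeline and is an assumption of this sketch.

```python
import numpy as np

def micro_prf(pred, gold):
    """Per-literal Precision, Recall and F1 over predicted vs. ground-truth class sets."""
    inter = len(pred & gold)
    p = inter / len(pred) if pred else 0.0
    r = inter / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1

def avg_f1_scores(literals, gold, predict_types):
    """Sweep theta from 0 to 1 in steps of 0.01; return AvgF1@all and AvgF1@top5."""
    f1_per_theta = []
    for theta in np.linspace(0.0, 1.0, 101):
        f1s = [micro_prf(predict_types(l, theta), gold[l])[2] for l in literals]
        f1_per_theta.append(float(np.mean(f1s)))
    top5 = sorted(f1_per_theta, reverse=True)[:5]
    return float(np.mean(f1_per_theta)), float(np.mean(top5))
```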
In evaluating entity matching performance, Precision is measured by manually checking whether the identified entity is correct or not. S-Lite is not used for entity matching evaluation as the corresponding entities for all its literals are assumed to be excluded from the KB. We are not able to measure recall for entity matching as we do not have the ground truths; instead, we have evaluated entity matching with different confidence thresholds and compared the number of correct results.
The evaluation includes three aspects. We first compare different settings of the typing framework, analyzing the impacts of sample refinement, fine tuning by particular samples, BiRNN and the attention mechanism. We also compare the independent and hierarchical typing strategies. We then compare the overall typing performance of our framework with (i) Gunaratna et al. BIBREF4 , which matches the literal to both classes and entities; (ii) an entity lookup based method; and (iii) a probabilistic property range estimation method. Finally, we analyze the performance of entity matching with and without the predicted types.
The DBpedia lookup service, which is based on the Spotlight index BIBREF18 , is used for entity lookup (retrieval). The DBpedia SPARQL endpoint is used for query answering and reasoning. The reported results are based on the following settings: the Adam optimizer together with cross-entropy loss are used for network training; $d_r$ and $d_a$ are set to 200 and 50 respectively; $N_0$ is set to 1200; word2vec trained with the latest Wikipedia article dump is adopted for word embedding; and ( $T_s$ , $T_p$ , $T_l$ ) are set to (12, 4, 12) for S-Lite and (12, 4, 15) for R-Lite. The experiments are run on a workstation with Intel(R) Xeon(R) CPU E5-2670 @2.60GHz, with programs implemented by Tensorflow.
Results on Framework Settings
We first evaluate the impact of the neural network architecture, fine tuning and different typing strategies, with their typing results on S-Lite shown in Table 2 and Fig. 3 . Our findings are supported by comparable results on R-Lite. We further evaluate sample refinement, with some statistics of the refinement operations as well as performance improvements shown in Fig. 4 .
According to Table 2 , we find BiRNN significantly outperforms Multiple Layer Perceptron (MLP), a basic but widely used neural network model, while stacking an attention layer (AttBiRNN) further improves AvgF1@all and AvgF1@top5, for example by $3.7\%$ and $3.1\%$ respectively with hierarchical typing ( $\kappa $ = $-0.1$ ). The result is consistent for both pre-trained models and fine tuned models, using both independent and hierarchical typing strategies. This indicates the effectiveness of our neural network architecture. Meanwhile, the performance of all the models is significantly improved after they are fine tuned by the particular samples, as expected. For example, when the independent typing strategy is used, AvgF1@all and AvgF1@top5 of AttBiRNN are improved by $54.1\%$ and $35.2\%$ respectively.
The impact of independent and hierarchical typing strategies is more complex. As shown in Table 2 , when the classifier is weak (e.g., pre-trained BiRNN), hierarchical typing with both hard exclusion ( $\kappa $ = 0) and relaxed exclusion ( $\kappa $ = $-0.1$ ) has higher AvgF1@all and AvgF1@top5 than independent typing. However, when a strong classifier (e.g., fine tuned AttBiRNN) is used, AvgF1@all and AvgF1@top5 of hierarchical typing with relaxed exclusion are close to independent typing, while hierarchical typing with hard exclusion has worse performance. We further analyze Precision, Recall and F1 Score of both typing strategies under varying threshold ( $\theta $ ) values, as shown in Fig. 3 . In comparison with independent typing, hierarchical typing achieves (i) more stable Precision, Recall and F1 Score curves; and (ii) significantly higher Precision, especially when $\theta $ is small. Meanwhile, as with the results in Table 2 , relaxed exclusion outperforms hard exclusion in hierarchical typing except for Precision when $\theta $ is between 0 and $0.05$ .
Fig. 4 [Right] shows the ratio of positive and negative particular samples that are deleted and added during sample refinement. The AttBiRNN classifiers fine tuned by the refined particular samples are compared with those fine tuned by the original particular samples. The improvements on AvgF1@all, AvgF1@top5 and top Precision, which are based on the average of the three above typing settings, are shown in Fig. 4 [Left]. On the one hand, we find sample refinement benefits both S-Lite and R-Lite, as expected. On the other hand, we find the improvement on S-Lite is limited, while the improvement on R-Lite is quite significant: F1@all and top Precision, e.g., are improved by around $0.8\%$ and $1.8\%$ respectively on S-Lite, but $4.3\%$ and $7.4\%$ respectively on R-Lite. This may be due to two factors: (i) the ground truths of S-Lite are the entities' class and super classes inferred from the KB itself, while the ground truths of R-Lite are manually labeled; (ii) sample refinement deletes many more noisy positive and negative samples (which are caused by wrong entity classifications of the KB) on R-Lite than on S-Lite, as shown in Fig. 4 [Right].
Results on Semantic Typing
Table 3 displays the overall semantic typing performance of our method and the baselines. Results for two optimum settings are reported for each method. The baseline Entity-Lookup retrieves one or several entities using the whole phrase of the literal, and uses their classes and super classes as the types. Gunaratna BIBREF4 matches the literal's focus term (head word) to an exact class, then an exact entity, and then a class with the highest similarity score. It stops as soon as some classes or entities are matched. We extend its original “exact entity match" setting with “relaxed entity match" which means multiple entities are retrieved. Property Range Estimation gets the classes and super classes from the entity objects of the property, and calculates the score of each class as the ratio of entity objects that belong to that class. (H/I, $\kappa $ , $\cdot $ )@top-P (F1) denotes the setting where the highest Precision (F1 Score) is achieved.
As we can see, AttBiRNN achieves much higher performance than all three baselines on both S-Lite and R-Lite. For example, the F1 Score of AttBiRNN is $67.6\%$ , $160.2\%$ and $13.8\%$ higher than those of Gunaratna, Entity-Lookup and Property Range Estimation respectively on S-Lite, and $28.5\%$ , $58.3\%$ and $37.9\%$ higher respectively on R-Lite. AttBiRNN also has significantly higher Precision and Recall, even when the setting is adjusted for the highest F1 Score. This is as expected, because our neural network, which learns the semantics (statistical correlation) from both word vector corpus and KB, models and utilizes the contextual meaning of the literal and its associated triple, while Gunaratna and Entity-Lookup are mostly based on lexical similarity. The performance of Property Range Estimation is limited because the object annotation in DBpedia usually does not follow the property range, especially for those properties in R-Lite. For example, objects of the property dbp:office have 35 DBO classes, ranging from dbo:City and dbo:Country to dbo:Company.
It is also notable that AttBiRNN and Property Range Estimation perform better on S-Lite than on R-Lite. The top F1 Score is $20.7\%$ and $46.2\%$ higher respectively, while the top Precision is $11.4\%$ and $43.6\%$ higher respectively. This is because R-Lite is more noisy, with longer literals, and has more ground truth types on average (cf. Table 1 ), while S-Lite has fewer properties, and each property has a large number of entity objects, which significantly benefits Property Range Estimation. In contrast, the two entity matching based methods, Gunaratna and Entity-Lookup, perform worse on S-Lite than on R-Lite; this is because the construction of S-Lite removes those KB entities from which literals were derived. Gunaratna outperforms Entity-Lookup as it extracts the head word and matches it to both entities and classes. Note that the head word is also included in our candidate class extraction with lookup.
Results on Entity Matching
Table 4 displays the number of correct matched entities and the Precision of entity matching on R-Lite. The types are predicted by the fine-tuned AttBiRNN with independent typing and two threshold settings. We can see that Precision is improved when the retrieved entities that do not belong to any of the predicted types are filtered out. The improvement is $6.1\%$ and $5.8\%$ when $\theta $ is set to $0.15$ and $0.01$ respectively. Meanwhile, although the total number of matches may decrease because of the filtering, the number of correct matches still increases from 396 to 404 ( $\theta =0.01$ ). This means that Recall is also improved.
Related Work
Work on KB quality issues can be divided into KB quality assessment BIBREF2 , BIBREF1 , and KB quality improvement/refinement BIBREF3 . The former includes error and anomaly detection methods, such as test-driven and query template based approaches BIBREF19 , BIBREF20 , with statistical methods BIBREF21 and consistency reasoning BIBREF22 also being applied to assess KB quality with different kinds of metric. The latter includes (i) KB completion, such as entity classification BIBREF7 , BIBREF8 , BIBREF9 , relation prediction BIBREF23 and data typing BIBREF15 ; and (ii) KB diagnosis and repair, such as abnormal value detection BIBREF20 , erroneous identity link detection BIBREF24 and data mapping (e.g., links to Wikipedia pages) correction BIBREF25 .
KB canonicalization refers to those refinement works that deal with redundant and ambiguous KB components as well as poorly expressed knowledge with limited reasoning potential. Some works in open information extraction (IE) BIBREF26 , BIBREF27 , BIBREF28 aim to identify synonymous noun phrases and relation phrases of open KBs which are composed of triple assertions extracted from text without any ontologies. For example, the recently proposed CESI method BIBREF27 utilizes both learned KB embeddings and side information like WordNet to find synonyms via clustering. Other works analyze synonyms for ontological KBs. Abedjan et al. BIBREF29 discovered synonymously used predicates for query expansion on DBpedia. Pujara et al. BIBREF30 identified coreferent entities of NELL with ontological constraints considered. These clustering, embedding, or entity linking based methods in open IE however can not be directly applied or do not work well for our KB literal canonicalization. The utilization of these techniques will be in our future work.
String literals in ontological KBs such as DBpedia often represent poorly expressed knowledge, with semantic types and coreferent entities missed. As far as we know, canonicalization of such literals has been little studied. Gunaratna et al. BIBREF4 typed the literal by matching its head term to ontology classes and KB entities, but the literal context (e.g., the associated subject and property) and semantic meaning of the composition words were not utilized. Some ideas of entity classification can be borrowed for literal typing but will become ineffective as the context differs. For example, the baseline Property Range Estimation in our experiments uses the idea of SDType BIBREF8 — utilizing the statistical distribution of types in the subject position and object position of properties to estimate an entity's type probabilities. As a literal is associated with only one property, such probabilistic estimation becomes inaccurate (cf. results in Table 3 ).
Our literal classification model is in some degree inspired by those natural language understanding and web table annotation works that match external noun phrases to KB types and entities BIBREF14 , BIBREF10 , BIBREF12 using neural networks and semantic embeddings for modeling the contextual semantics. For example, Luo et al. BIBREF10 learned features from the surrounding cells of a target cell to predict its entity association. However the context in those works is very different, i.e., a simple regular structure of rows/columns with limited (table) metadata. In contrast, KBs have a complex irregular structure and rich meta data (the knowledge captured in the KB). Differently from these works, we developed different methods, e.g., candidate class extraction and high quality sampling, to learn the network from the KB with its assertions, terminologies and reasoning capability.
Discussion and Outlook
In this paper we present our study on KB literal canonicalization — an important problem on KB quality that has been little studied. A new technical framework is proposed with neural network and knowledge-based learning. It (i) extracts candidate classes as well as their positive and negative samples from the KB by lookup and query answering, with their quality improved using an external KB; (ii) trains classifiers that can effectively learn a literal's contextual features with BiRNNs and an attention mechanism; (iii) identifies types and matches entity for canonicalization. We use a real data set and a synthetic data set, both extracted from DBpedia, for evaluation. It achieves much higher performance than the baselines that include the state-of-the-art. We discuss below some more subjective observations and possible directions for future work.
Acknowledgments
The work is supported by the AIDA project (U.K. Government's Defence & Security Programme in support of the Alan Turing Institute), the SIRIUS Centre for Scalable Data Access (Research Council of Norway, project 237889), the Royal Society, EPSRC projects DBOnto, $\text{MaSI}^{\text{3}}$ and $\text{ED}^{\text{3}}$ . | 0.8320 on semantic typing, 0.7194 on entity matching |
65e2f97f2fe8eb5c2fa41cb95c02b577e8d6e5ee | 65e2f97f2fe8eb5c2fa41cb95c02b577e8d6e5ee_0 | Q: How did they measure effectiveness?
Text: Introduction
Modern speech-based assistants, such as Amazon Alexa, Google Home, Microsoft Cortana, and Apple Siri, enable users to complete daily tasks such as shopping, setting reminders, and playing games using voice commands. Such human-like interfaces create a rich experience for users by enabling them to complete many tasks hands- and eyes-free in a conversational manner. Furthermore, these services offer tools to enable developers and customers to create custom voice experiences (skills) and as a result extend the capabilities of the assistant. Amazon's Alexa Skills Kit BIBREF0, Google's Actions and Microsoft's Cortana Skills Kit are examples of such tools. As the number of skills (with potentially overlapping functionality) increases, it becomes more difficult for end users to find the skills that can address their request.
To mitigate the skill discovery problem, recently researchers have proposed solutions for personalized domain selection and continuous domain adaptation in speech-based assistants BIBREF1, BIBREF2. Although such solutions help users find skills, in scenarios such as searching for a game where many different skills exist and user's preferences change, routing the user to a particular experience would not be satisfactory. In such cases, the assistant should initiate a conversation with the user, making recommendations, asking for preferences, and allowing the user to browse through different options. Similar to other search problems, personalization is important for conversational skill discovery and can be achieved at two levels: 1) personalization of skill recommendations, and 2) personalization of the interaction. Users have evolving attributes (e.g., first-time vs returning user) and different conversational styles and preferences (e.g., brief vs verbose communication) which affect how they respond to what the agent is proposing and its recommendations. By personalizing the interaction according to user attributes, conversational styles and preferences, the speech-based assistant can help speed up the conversation process BIBREF3 and increase user satisfaction. However, existing works are limited with respect to considering user's evolving attributes and diverse multi-aspect preferences BIBREF4, such as preferences with respect to how the conversational agent interacts with them.
In this paper, we focus on conversational discovery of skills to guide customers from an intent to a specific skill or set of skills that can serve their request. To this end, we start with a rule-based agent and improve it by using reinforcement learning (RL), enabling the agent to adapt to different conversational styles as it interacts with users. In summary, the contributions of this paper are as follows: 1) We introduce the problem of conversational skill discovery for large-scale virtual assistants. 2) We describe a solution which enables the assistant to adapt to user's attributes (e.g., first-time user vs returning user) and conversational styles (e.g., brief vs. verbose). 3) We conduct experiments in a real production setting by deploying the agent to interact with real users in large scale, showing that the personalized policy learned using RL significantly outperforms a one-fits-all rule-based agent in terms of success rate (measured in terms of number of dialogs which result in launching a skill) with significantly shorter dialogs.
Conversational Skill Discovery
Conversational skill discovery is the task of initiating a dialog with the user in order to help them find the skills that address their needs when interacting with a speech-based assistant. More specifically, a conversational skill discovery agent receives a natural language input from the user, understands it using its automatic speech recognition (ASR) and natural language understanding (NLU) components, and decides how to respond to the user based on user provided and contextual information in order to help the user find the needed skill. Skills can often be grouped into categories and subcategories based on functionality (e.g., ride-sharing skills or trivia games). These categories help customers explore with much more specificity and relevance, and as such, a key functionality of a skill discovery system is to allow users to browse through existing categories. Additionally, it is important for the agent to be able to adapt to user's conversational styles, over time shifting to more and more personalized conversations with the user.
Table TABREF1 shows an example of a dialog between a user and an agent. Here, in each turn of the dialog, the user can either ask for a particular category or skill, select from the list of recommendations, accept or reject a recommendation, ask for other (sub)categories or skills, ask for details or rating of a skill, or perform some general action such as asking for help, asking the agent to repeat the previous prompt, going over a list of recommendations, going back in the conversation, or asking the agent to stop. The agent, on the other hand, can suggest a skill, provide information or help, offer a few different types of categories to choose from, stop the conversation if it is not going well, or launch a selected skill.
Conversational Skill Discovery ::: Problem Formulation
Conversational skill discovery, similar to other goal-oriented dialog systems, can be formalized as a Markov Decision Process (MDP) BIBREF5. An MDP is a tuple $<\mathcal {S}, \mathcal {A}, \mathcal {P}, \mathcal {R}, \mathcal {\gamma }>$, where $\mathcal {S}$ is the state space, $\mathcal {A}$ is the action space, $\mathcal {P}$ is the transition probability function, $\mathcal {R}$ is the reward function, and $\mathcal {\gamma }$ is the discount factor. In this framework, at each time step t, the agent observes state $s_t \in \mathcal {S}$ and selects action $a_t \in \mathcal {A}$ according to its policy ($\pi : \mathcal {S} \rightarrow \mathcal {A}$). After performing the selected action, the agent receives the next state $s_{t+1}$ and a scalar reward $r_t$. The trajectory restarts after the agent reaches a terminal state. RL solvers have been used to find the optimal dialog policy (e.g., BIBREF6 BIBREF6; BIBREF7 BIBREF7; BIBREF8 BIBREF8; BIBREF9 BIBREF9). In this context, at each turn the agent acts based on its understanding of what the user said, and reward function is modeled in terms of various dimensions of the interaction such as per-interaction user satisfaction, accomplishment of the task, efficiency of interaction, and dialog duration. Recently, deep RL has also been applied to the problem of dialog management and has shown improvements over rule-based systems BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14.
In this paper, we adopt the above formalism with the goal of training a dialog policy which allows the agent to take actions that maximize its success rate (measured in terms of number of dialogs which result in launching a skill) while providing a flexible and natural way for the user to navigate throughout various dialog states. In each turn of the dialog, the agent makes its decisions based on various available information such as user's intent (e.g., asking for a particular skill), the category the user has selected, whether the user is a first-time user, etc. In order to make the agent adapt to different conversational styles, when making recommendations, we focus on 1) whether to recommend skills or categories, 2) how many skills or categories to recommend, and 3) what type of metadata to provide to the user. Examples of metadata include: popularity, star rating, number of reviews, or a short description of the skill. The agent can proactively provide metadata to the user at certain points in the experience. Depending on user's conversational style, they may prefer brief conversations with the agent (i.e., no metadata), or verbose with different types of metadata.
An important challenge in using RL for learning dialog policies is creating realistic user simulators that can generate natural conversations similar to a human user BIBREF15, and as such in previous works researchers have focused on the development of different types of user simulators (e.g., BIBREF16 BIBREF16; BIBREF17 BIBREF17; BIBREF18 BIBREF18; BIBREF19 BIBREF19; BIBREF15 BIBREF15; BIBREF20 BIBREF20; BIBREF21 BIBREF21). We take a data-driven approach to user simulation, and start with a rule-based policy to gather data and then improve the agent by using RL.
Conversational Skill Discovery ::: Rule-based Agent
The rule-based agent selects from the following actions depending on user's intent in each turn of the dialog: 1) offering k categories ($1 \le k \le 5$), 2) offering n skills ($1 \le n \le 3$), 3) offering a skill or asking for category, 4) providing information about skill rating, 5) providing details about a skill, 6) ending the conversation, and 7) launching a skill. When multiple actions are possible, the rule-based agent randomly selects among them. For example, at the beginning of the dialog, the agent randomly selects among different offer actions. If all skills in a category have been exhausted, the agent will inform the user that no additional skills are available for the selected category. Furthermore, each action is mapped to a specific prompt template. For example, offering a skill or asking for category can be mapped to "Would you like to launch $<$skill$>$ or try a different type of skill?", where the specific skill is provided by a skill recommendation system. Additionally, in cases where the agent does not understand what the user has said (e.g., out-of-domain requests), it will first repeat the previous prompt, if user's utterance is again misunderstood, it will give a new prompt, and finally it will stop the conversation.
Conversational Skill Discovery ::: User Simulation
We deployed the rule-based system to gather dialogs with users and trained a user simulator similar to BIBREF13 with $180,000$ dialogs with real users. Note that the collected dialogs are not annotated and may include understanding errors. Figure FIGREF5 illustrates the interaction between the user simulator (left) and the conversational agent (right). More specifically, the user simulator first generates the next user intent based on dialog context. Intent generation is modeled as a language modeling problem. In this formulation, each possible intent forms a token in the vocabulary, and every training dialog becomes a training intent sequence. For example, the sequence for the conversation in Table TABREF1 is [Start, CategoryName, CategoryName, GetRating, Yes, End].
We used recurrent neural networks with Gated Recurrent Unit (GRU) BIBREF22 to predict the next user intent, and used the following for dialog context: 1) previous user intent, 2) previous agent action, 3) previous agent prompt, 4) whether the user is a first-time user, 5) whether the user has already selected an item (skill or category) from a list, and 6) number of user turns so far in the conversation. The optimal parameters were found using Hyperopt BIBREF23 and the model with lowest perplexity BIBREF24 score was chosen. Given the predicted intent, the user simulator uniformly samples one utterance from the combination of available templates and user turns in the collected dialogs.
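A bare-bones PyTorch version of such a next-intent predictor might look as follows; only the intent history is used here, whereas the dialog-context features listed above would be concatenated to the recurrent output in a full implementation (the layer sizes are illustrative assumptions).

```python
import torch
import torch.nn as nn

class IntentLM(nn.Module):
    """Predicts the next user intent from the sequence of previous intents.
    Contextual features (previous agent action/prompt, first-time flag, etc.)
    would be concatenated to the GRU output in a complete implementation."""
    def __init__(self, n_intents, emb_dim=32, hidden_dim=64):
        super().__init__()
        self.emb = nn.Embedding(n_intents, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_intents)

    def forward(self, intent_ids):                  # (batch, seq_len) of intent indices
        h, _ = self.gru(self.emb(intent_ids))       # (batch, seq_len, hidden_dim)
        return self.out(h[:, -1])                   # logits for the next intent

def sample_next_intent(model, history):
    logits = model(torch.tensor([history]))
    probs = torch.softmax(logits, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()
```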
Conversational Skill Discovery ::: RL-based Agent
The components used to learn dialog policies using RL are as follows.
State Space S: The input state is composed of 1) user's intent 2) previous action the agent took, 3) previous prompt and metadata it gave the user, 4) the category the user has selected if any, 5) whether the agent has proposed a skill, 6) whether the user is a first-time user, and 7) number of user turns so far in the dialog. This set of parameters were selected using a forward feature selection approach based on the correlation between the new feature and the feature set with the goal of achieving a higher Expected Cumulative Reward (ECR) BIBREF25. This set can be augmented with user preferences regarding skills, the last skill launched by the user, or the frequency of skill launches.
Action Space A: We constrain the action space of the agent to a set of composite actions: 1) offering k categories (e.g., offer-one-category, offer-two-category), 2) offering n skills (e.g., offer-one-skill, offer-two-skill), 3) offering a skill or asking for category (e.g., offer-one-skill-or-category), 4) executing a user request, 5) ending the conversation, and 6) launching a skill. The execute action refers to delivering information such as providing skill ratings or more details about a skill, repeating the previous prompt, or handling out-of-domain requests. At run-time, the RL policy falls back on the rule-based policy for the execute action.
Reward R: We use a simple reward function based on goal completion, where the environment gives a reward of $+1$ at the end of the dialog if the user launches a skill, and gives a reward of $-1$ if the user or agent end the dialog.
Policy: We use DQN BIBREF26, BIBREF27 with action masking for the RL agent, with a fully-connected MLP to represent the deep Q-network. The hidden layers use a rectifier nonlinearity, and the output layer is a fully connected layer with linear activation function and a single output for each valid action. The action mask suppresses impossible actions in any particular dialog state, such as launching a skill before the user has selected one.
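Action masking amounts to suppressing invalid actions before the greedy or exploratory choice over the Q-network outputs; a small NumPy sketch of an epsilon-greedy step with masking is shown below (the DQN training loop itself is not reproduced).

```python
import numpy as np

def masked_epsilon_greedy(q_values, action_mask, epsilon, rng=np.random):
    """q_values: (n_actions,) Q-network output for the current dialog state.
    action_mask: boolean array, True for actions valid in this state
    (e.g., launching a skill is masked out until the user has selected one)."""
    valid = np.flatnonzero(action_mask)
    if rng.rand() < epsilon:
        return int(rng.choice(valid))                 # explore among valid actions only
    masked_q = np.where(action_mask, q_values, -np.inf)
    return int(np.argmax(masked_q))                   # greedy over valid actions
```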
Experimental Results
We focused on the use case of a user searching for a game to play among $1,903$ skills belonging to 48 game categories. Each category may also have subcategories, resulting in 191 total categories. Example of categories are adventure, trivia, choose your own story, family, and kids. The number of categories to offer k is set to one, three, and five; and the number of skills to offer n is set to one, based on the results of internal user studies. Table TABREF7 summarizes the state and action spaces. For all agents, we randomly sample from the set of possible prompts and metadata for the selected action. Furthermore, we used the Alexa Skill portal to train the NLU model from a set of sample utterances.
Experimental Results ::: Simulation Results
We trained the DQN agent using an $\epsilon $-greedy policy with $\epsilon $ decreasing linearly from 1 to $0.1$ over $100,000$ steps. Additionally, we tuned a window size to include previous dialog turns as input and set $\gamma $ to $0.9$. We ran the method 30 times for $150,000$ steps, and in each run, after every 10,000 steps, we sampled $3,000$ dialog episodes with no exploration to evaluate the performance. The optimal parameters were found using Hyperopt BIBREF23 (see Appendix B). Figure FIGREF9 shows the simulation results during training. The Y-axis in the figure is the success rate of the agent (measured in terms of number of dialogs that resulted in launching a skill divided by total number of dialogs), and the X-axis is the number of learning steps. Given our choice of reward function, the increase in success rate is indicative of the agent learning to improve its policy over time. Furthermore, the RL agent outperformed the rule-based agent with average success rate of $68.00\% (\pm 2\%$) in simulation.
Experimental Results ::: Human Evaluation
To evaluate the performance of the skill discovery agent, we deployed the dialog policies and evaluated them with real users (see Appendix A for examples of dialogs). We first conducted a test with a baseline policy of recommending up to five skills based on popularity and allowing the user to either accept or reject the recommendation. The success rate of this simple policy was $46.42$%, illustrating the importance of providing flexible search to the user. We then conducted an A/B test on the rule-based and RL policies to compare their effects on skill launches in a production environment. Both policies were tested on randomly sampled users, with the additional constraint of using the same policy for returning users. The results are reported in Table TABREF11. Both policies significantly outperform the baseline policy, indicating the importance of providing flexible search and navigation to users. Additionally, the difference between the success rate of the rule-based ($73.41$%) and RL ($76.99$%) policies is statistically significant ($p$-value $<$ $0.0001$) and the RL policy has significantly shorter dialogs ($p$-value $<$ $0.0001$), showing the importance of optimizing for the entire interaction with the user.
In order to understand the effect of adapting to user attributes, we investigated the difference in success rate between first-time and returning users for the two policies. First-time users make up $59.48\%$ and $60.14\%$ of the population for the rule-based and RL policies, respectively. Table TABREF12 shows the results. The RL policy significantly outperforms the rule-based policy for both first-time ($p$-value $<$ $0.0001$) and returning users ($p$-value $= 0.0002$), indicating that the RL model has learned and adapted to user attributes. Additionally, the RL policy has a similar performance for both groups of users. The difference for the rule-based policy between the two groups, on the other hand, is significant ($p$-value $= 0.0010$), indicating that this policy is more tuned to returning users. This highlights the difficulty of authoring personalized dialog policies with rules, and shows the advantage of using RL for this problem.
Related Work
Conversational search and recommendation, especially in the context of e-commerce, have been explored by researchers BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32. BIBREF28 introduced an interactive recommendation protocol and studied whether to ask absolute or relative questions when gathering user preferences. Their dialog system collects like/dislike and pairwise comparison feedback from users, and does not include actions typically present in a dialog system BIBREF32. BIBREF29 proposed a theoretical framework for conversational search. BIBREF33 framed the problem as a machine reading task and applied it to question answering. BIBREF30 developed a RL-based conversational search assistant, in which state and action spaces are domain specific and may require a significant amount of time to develop. BIBREF32 proposed a unified framework to integrate recommender and dialog systems, in which instead of just returning the top-ranking results for a given user query, the agent attempts to optimize for long term reward by asking the user for the value of an attribute. In their work, the action space is limited to two types of actions, namely, requesting for the value of an attribute or making a recommendation. BIBREF31 proposed a multi-memory network architecture and applied it to search and recommendation in e-commerce. Compared to previous works, our formulation of the search problem is domain independent, accounts for user attributes and conversational preferences, and includes actions typically present in a dialog system. Additionally, whereas existing works have not been evaluated in a real production setting, we conduct experiments with real users at large scale.
Conclusion
In this paper, we introduced the problem of conversational skill discovery in speech-based assistants and presented an approach to enable users to find skills. To this end, we started with a rule-based agent and improved it by using RL, enabling the agent to adapt to different user attributes and conversational styles. We compared popularity based, rule-based and RL-based model conversational agents by deploying them in a real production setting and showed that the RL agent learns to adapt its policy to achieve a higher success rate with shorter dialogs. For future work, we plan to further personalize the dialog policy based on user attributes and conversational preferences, and investigate richer state representations. Furthermore, we plan to explore the impact of evolving attributes and preferences on the learned policies.
Conclusion ::: Acknowledgments
We would like to thank the Alexa Machine Learning Platform team for making the customer experiments possible. We would also like to thank Jared Casale, Jason Pazis, Longshaokan Wang, and Spyros Matsoukas for their feedback and support.
Appendix ::: Examples of Dialogs
Dialog with the Rule-based Agent
Dialog with RL-based Agent | number of dialogs that resulted in launching a skill divided by total number of dialogs |
83f14af3ccca4ab9deb4c6d208f624d1e79dc7eb | 83f14af3ccca4ab9deb4c6d208f624d1e79dc7eb_0 | Q: Which of the two ensembles yields the best performance?
Text: Introduction
Imagine that you have a friend who claims to know a lot of trivia. During a quiz, you ask them about the native language of actor Jean Marais. They correctly answer French. For a moment you are impressed, until you realize that Jean is a typical French name. So you ask the same question about Daniel Ceccaldi (another French actor, but with an Italian-sounding name). This time your friend says “Italian, I guess.” If this were a Question Answering (QA) benchmark, your friend would have achieved a respectable accuracy of 50%. Yet, their performance does not indicate factual knowledge about the native languages of actors. Rather, it shows that they are able to reason about the likely origins of peoples' names (see Table TABREF1 for more examples).
BIBREF1 argue that the unsupervised BERT LM BIBREF0 memorizes factual knowledge about entities and relations. They base this statement on the unsupervised QA benchmark LAMA (§SECREF2), where BERT rivals a knowledge base (KB) built by relation extraction. They suggest that BERT and similar LMs could become a “viable alternative to traditional knowledge bases extracted from text”. We argue that the impressive performance of BERT is partly due to reasoning about (the surface form of) entity names. In §SECREF4, we construct LAMA-UHN (UnHelpful Names), a more “factual” subset of LAMA-Google-RE and LAMA-T-REx, by filtering out queries that are easy to answer from entity names alone. We show that the performance of BERT decreases dramatically on LAMA-UHN.
In §SECREF3, we propose E-BERT, a simple mapping-based extension of BERT that replaces entity mentions with wikipedia2vec entity embeddings BIBREF3. In §SECREF4, we show that E-BERT rivals BERT and the recently proposed entity-enhanced ERNIE model BIBREF2 on LAMA. E-BERT has a substantial lead over both baselines on LAMA-UHN; furthermore, ensembles of E-BERT and BERT outperform all baselines on original LAMA.
LAMA
The LAMA (LAnguage Model Analysis) benchmark BIBREF1 is supposed to probe for “factual and commonsense knowledge” inherent in LMs. In this paper, we focus on LAMA-Google-RE and LAMA-T-REx BIBREF5, which are aimed at factual knowledge. Contrary to most previous works on QA, LAMA tests LMs as-is, without supervised finetuning.
The LAMA probing task follows this schema: Given a KB triple of the form (S, R, O), the object is elicited with a relation-specific cloze-style question, e.g., (Jean_Marais, native-language, French) becomes: “The native language of Jean Marais is [MASK].” The LM predicts a distribution over a limited vocabulary to replace [MASK], which is evaluated against the known gold answer.
LAMA ::: LAMA-UHN
It is often possible to guess properties of an entity from its name, with zero factual knowledge of the entity itself. This is because entities are often named according to implicit or explicit rules (e.g., the cultural norms involved in naming a child, copyright laws for industrial products, or simply a practical need for descriptive names). LAMA makes guessing even easier by its limited vocabulary, which may only contain a few candidates for a particular entity type.
We argue that a QA benchmark that does not control for entity names does not assess whether an LM is good at reasoning about names, good at memorizing facts, or both. In this Section, we describe the creation of LAMA-UHN (UnHelpfulNames), a subset of LAMA-Google-RE and LAMA-T-REx.
Filter 1: The string match filter deletes all KB triples where the correct answer (e.g., Apple) is a case-insensitive substring of the subject entity name (e.g., Apple Watch). This simple heuristic deletes up to 81% of triples from individual relations (see Appendix for statistics and examples).
Filter 2: Of course, entity names can be revealing in ways that are more subtle. As illustrated by our French actor example, a person's name can be a useful prior for guessing their native language and by extension, their nationality, place of birth, etc. Our person name filter uses cloze-style questions to elicit name associations inherent in BERT, and deletes KB triples that correlate with them. Consider our previous example (Jean_Marais, native-language, French). We whitespace-tokenize the subject name into Jean and Marais. If BERT considers either name to be a common French name, then a correct answer is insufficient evidence for factual knowledge about the entity Jean_Marais. On the other hand, if neither Jean nor Marais are considered French, but a correct answer is given nonetheless, then we consider this sufficient evidence for factual knowledge.
We query BERT for answers to “[X] is a common name in the following language: [MASK].” for both [X] = Jean and [X] = Marais. If the correct answer is among the top-3 for either query, we delete the triple. We apply this filter to Google-RE:place-of-birth, Google-RE:place-of-death, T-REx:P19 (place of birth), T-REx:P20 (place of death), T-REx:P27 (nationality), T-REx:P103 (native language) and T-REx:P1412 (language used). See Appendix for statistics. Depending on the relation, we replace “language” with “city” or “country” in the template.
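The person name filter can be sketched as below; the fill-mask pipeline and its top_k argument are an illustrative choice of tooling, not the exact implementation used for the benchmark:

from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-cased")

def name_reveals_answer(subject_name, answer, template, k=3):
    # True if BERT already associates any whitespace-separated name part with the answer.
    for part in subject_name.split():
        query = template.format(name=part, mask=fill_mask.tokenizer.mask_token)
        top_k = [p["token_str"].strip() for p in fill_mask(query, top_k=k)]
        if answer in top_k:
            return True
    return False

# Delete (Jean_Marais, native-language, French) if "Jean" or "Marais" elicits "French":
print(name_reveals_answer("Jean Marais", "French",
                          "{name} is a common name in the following language: {mask}."))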
Figure FIGREF5 (blue bars) shows that BERT is strongly affected by filtering, with a drop of 5%–10% mean P@1 from original LAMA to LAMA-UHN. This suggests that BERT does well on LAMA partly because it reasons about (the surface form of) entity names. Of course, name-based reasoning is a useful ability in its own right; however, conflating it with factual knowledge may be misleading.
E-BERT ::: BERT.
BERT BIBREF0 is a deep bidirectional transformer encoder BIBREF6 pretrained on unlabeled text. It segments text into subword tokens from a vocabulary $\mathbb {L}_b$. During training, some tokens are masked by a special [MASK] token. Tokens are embedded into real-valued vectors by an embedding function $\mathcal {E}_\mathcal {B} : \mathbb {L}_b \rightarrow \mathbb {R}^{d_\mathcal {B}}$. The embedded tokens are contextualized by the BERT encoder $\mathcal {B}$ and the output of $\mathcal {B}$ is fed into a function $\mathcal {M}_\mathcal {B}: \mathbb {R}^{d_\mathcal {B}} \rightarrow \mathbb {L}_b$ that predicts the identity of masked tokens. BERT can thus be used as an LM.
E-BERT ::: Wikipedia2vec.
Wikipedia2vec BIBREF3 embeds words and wikipedia pages ($\approx $ entities) in a common space. It learns an embedding function for a vocabulary of words $\mathbb {L}_w$ and a set of entities $\mathbb {L}_e$. We denote this function as $\mathcal {F}: \mathbb {L}_w \cup \mathbb {L}_e \rightarrow \mathbb {R}^{d_\mathcal {F}}$. The wikipedia2vec loss has three components: (a) skipgram word2vec BIBREF7 operating on $\mathbb {L}_w$ (b) a graph loss on the wikipedia link graph on $\mathbb {L}_e$ (c) a version of word2vec where words are predicted from entity mentions. Loss (c) ensures that word and entity embeddings share a space. Figure FIGREF5 (black horizontal bars) shows that loss (b) is vital for our use case.
E-BERT ::: E-BERT.
We want to transform the output space of $\mathcal {F}$ in such a way that $\mathcal {B}$ is fooled into accepting entity embeddings in lieu of its native subword embeddings. We approximate this goal by minimizing the squared distance of transformed wikipedia2vec word vectors and BERT subword vectors:
$\mathcal {W} = \mathop {\mathrm {argmin}}_{\mathcal {W}^{\prime }} \sum _{x \in \mathbb {L}_b \cap \mathbb {L}_w} \Vert \mathcal {W}^{\prime } \mathcal {F}(x) - \mathcal {E}_\mathcal {B}(x) \Vert _2^2,$
where $\mathcal {W}$ is a linear projection obtained by least squares. Since $\mathcal {F}$ embeds $\mathbb {L}_w$ and $\mathbb {L}_e$ into the same space, $\mathcal {W}$ is applicable to members of $\mathbb {L}_e$, even though it was learned on members of $\mathbb {L}_w$.
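Because the projection has a closed-form least-squares solution, it can be fitted in a few lines. The sketch below assumes two aligned matrices of wikipedia2vec and BERT input embeddings for the shared vocabulary, with random data standing in for the real embeddings:

import numpy as np

def fit_mapping(wiki_vecs, bert_vecs):
    # Solve min_W || wiki_vecs @ W.T - bert_vecs ||^2 in closed form.
    # wiki_vecs: (n, d_F) wikipedia2vec vectors for the words shared by both vocabularies
    # bert_vecs: (n, d_B) BERT input embeddings for the same words
    W, *_ = np.linalg.lstsq(wiki_vecs, bert_vecs, rcond=None)
    return W.T  # (d_B, d_F), so that W @ F(x) lives in BERT's input space

def embed_entity(entity_vec, W):
    # Map a wikipedia2vec entity vector into the BERT embedding space.
    return W @ entity_vec

rng = np.random.default_rng(0)
W = fit_mapping(rng.normal(size=(1000, 768)), rng.normal(size=(1000, 768)))
print(embed_entity(rng.normal(size=768), W).shape)  # (768,)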
Recall that BERT segments text into subwords, e.g., our previous example is tokenized as: The native language of Jean Mara ##is is [MASK] .
E-BERT replaces the subwords that correspond to the entity mention with the symbolic entity: The native language of Jean_Marais is [MASK] .
The entity (truetype) is embedded by $\mathcal {W} \circ \mathcal {F}$, while other tokens (italics) continue to be embedded by $\mathcal {E}_\mathcal {B}$. The altered embedding sequence is fed into $\mathcal {B}$, where it is treated like any other embedding sequence. Neither $\mathcal {B}$ nor $\mathcal {M}_\mathcal {B}$ are changed.
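The following sketch shows one way to feed such a mixed embedding sequence into an unchanged BERT via the inputs_embeds argument; the zero vector stands in for the mapped entity vector $\mathcal {W} \circ \mathcal {F}$(Jean_Marais), and this is an illustration rather than the authors' released code:

import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased").eval()
embed = model.get_input_embeddings()  # E_B

def encode_with_entity(prefix, entity_vec, suffix):
    # Embed "prefix <entity> suffix", with the entity given directly as a vector.
    pre = tokenizer(prefix, add_special_tokens=False, return_tensors="pt").input_ids
    suf = tokenizer(suffix, add_special_tokens=False, return_tensors="pt").input_ids
    cls = torch.tensor([[tokenizer.cls_token_id]])
    sep = torch.tensor([[tokenizer.sep_token_id]])
    pieces = [embed(cls), embed(pre), entity_vec.view(1, 1, -1), embed(suf), embed(sep)]
    with torch.no_grad():
        return model(inputs_embeds=torch.cat(pieces, dim=1)).logits

logits = encode_with_entity("The native language of",
                            torch.zeros(768),  # stand-in for W applied to F(Jean_Marais)
                            "is [MASK] .")
print(logits.shape)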
We ensemble BERT and E-BERT by (a) mean-pooling their outputs (AVG) or (b) concatenating the entity and its name with a slash symbol (CONCAT), e.g.: Jean_Marais / Jean Mara ##is.
Experiments ::: Systems.
We train cased wikipedia2vec on a recent wikipedia dump (2019-09-02), setting $d_\mathcal {F} = d_\mathcal {B}$. To learn $\mathcal {W}$, we intersect the wikipedia2vec word vocabulary with the cased BERT vocabulary.
Our primary baselines are BERT$_\mathrm {base}$ and BERT$_\mathrm {large}$ as evaluated in BIBREF1. We also test ERNIE BIBREF2, a BERT$_\mathrm {base}$ type model that uses wikidata TransE entity embeddings BIBREF8 as additional input. ERNIE has two transformers, one for tokens and one for entities, which are fused by a trainable feed-forward module. To accommodate the new parameters, ERNIE is pre-trained with (a) standard BERT loss and (b) predicting Wikipedia entities.
Note that wikipedia2vec and TransE have low coverage on LAMA-Google-RE (wikipedia2vec: 54%, TransE: 71%). When an entity embedding is missing, we fall back onto original BERT. Coverage of LAMA-T-REx is $>98$% for both systems.
Experiments ::: LAMA.
In keeping with BIBREF1, we report P@k macro-averaged over relations. Macro-averaging ensures that every relation has the same impact on the metric before and after filtering.
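A sketch of the metric, assuming per-query ranked predictions grouped by relation (the field names are illustrative):

from collections import defaultdict

def macro_p_at_k(queries, k=1):
    # queries: iterable of dicts with keys "relation", "gold", "ranked_candidates".
    per_relation = defaultdict(list)
    for q in queries:
        hit = q["gold"] in q["ranked_candidates"][:k]
        per_relation[q["relation"]].append(float(hit))
    relation_scores = [sum(v) / len(v) for v in per_relation.values()]
    return sum(relation_scores) / len(relation_scores)

print(macro_p_at_k([
    {"relation": "P103", "gold": "French", "ranked_candidates": ["French", "Italian"]},
    {"relation": "P19", "gold": "Paris", "ranked_candidates": ["Lyon", "Paris"]},
], k=1))  # 0.5: one relation solved at rank 1, the other not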
Figure FIGREF5 shows that E-BERT performs comparably to BERT and ERNIE on unfiltered LAMA. However, E-BERT is less affected by filtering on LAMA-UHN, suggesting that its performance is more strongly due to factual knowledge. Recall that we lack entity embeddings for 46% of Google-RE subjects, i.e., E-BERT cannot improve over BERT on almost half of the Google-RE tuples.
Figure FIGREF15 plots deltas in mean P@1 on unfiltered LAMA-T-REx relations relative to BERT, along with the frequency of tuples whose object entity name is a substring of the subject entity name – i.e., the ratio of queries that would be deleted by the string match filter. We see that E-BERT losses relative to BERT (negative red bars) are mostly on relations with a high percentage of trivial substring answers. By contrast, E-BERT typically outperforms BERT on relations where such trivial answers are rare. The ensembles are able to mitigate the losses of E-BERT on almost all relations, while keeping most of its gains (purple and orange bars). This suggests that they successfully combine BERT's ability to reason about entity names with E-BERT's enhanced factual knowledge.
Figure FIGREF17 shows that the lead of E-BERT and the ensembles over BERT and ERNIE in terms of mean P@k is especially salient for bigger k.
Experiments ::: FewRel.
We also evaluate on the FewRel relation classification dataset BIBREF9, using the setup and data split from zhang2019ernie (see Appendix for details). Table TABREF19 shows that E-BERT beats BERT, and the ensembles perform comparably to ERNIE despite not having a dedicated entity encoder.
Related work
Factual QA is typically tackled as a supervised problem (e.g., BIBREF10, BIBREF11). In contrast, LAMA BIBREF1 tests for knowledge learned by LMs without supervision; similar experiments were performed by BIBREF12. Their experiments do not differentiate between factual knowledge of LMs and their ability to reason about entity names.
The E-BERT embedding mapping strategy is inspired by cross-lingual embedding mapping on identical strings BIBREF13. A similar method was recently applied by BIBREF14 to map cross-lingual FastText subword vectors BIBREF15 into the multilingual BERT subword embedding space. BIBREF16 mimic BERT subword embeddings for rare English words from their contexts and form.
Other contextualized models that incorporate entity embeddings are ERNIE BIBREF2 (see §SECREF4) and KnowBert BIBREF17. KnowBert is contemporaneous to our work, and at the time of writing, the model was not available for comparison.
Both ERNIE and KnowBert add new parameters to the BERT architecture, which must be integrated by additional pretraining. By contrast, E-BERT works with the unchanged BERT model, and $\mathcal {W}$ has an efficient closed-form solution. This means that we can update E-BERT to the newest wikipedia dump at little computational cost – the most expensive operation would be training wikipedia2vec, which takes a few hours on CPUs.
Conclusion
We have presented evidence that the surprising performance of BERT on the recently published LAMA QA benchmark is partly due to reasoning about entity names rather than factual knowledge. We have constructed more “factual” subsets of LAMA-Google-RE and LAMA-T-REx by filtering out easy-to-guess queries. The resulting benchmark, LAMA-UHN, is more difficult for BERT.
As a remedy, we proposed E-BERT, a simple extension of BERT that injects wikipedia2vec entity embeddings into BERT. E-BERT outperforms BERT and ERNIE on LAMA-UHN, which we take as evidence that E-BERT is richer in factual knowledge. Additionally, ensembling yields improvements over both BERT and E-BERT on unfiltered LAMA and on the FewRel relation classification dataset.
FewRel training
We use the sentence classification setup from BIBREF2. We mark subjects and objects with the symbols # and $, i.e., the inputs to BERT, E-BERT and the CONCAT ensemble look as follows:
[CLS] $ Tang ##ier $ ' s # Ibn Bat ##to ##uta Airport # is the busiest airport in the region . [SEP]
[CLS] $ Tangier $ ' s # Tangier_Ibn_Battouta_Airport # is the busiest airport in the region . [SEP]
[CLS] $ Tangier / Tang ##ier $ ' s # Tangier_Ibn_Battouta_Airport / Ibn Bat ##to ##uta Airport # is the busiest airport in the region . [SEP]
where entities (in truetype) are embedded by $\mathcal {W} \circ \mathcal {F}$ and all other tokens (in italics) are embedded by $\mathcal {E}_\mathcal {B}$. Note that entity IDs are provided by FewRel. If we lack an entity embedding, we fall back onto the standard BERT segmentation.
To predict the relation, we feed the contextualized embedding of the [CLS] token into a linear classifier. During training we finetune all network parameters except for the embeddings. For hyperparameter tuning, we use the ranges from BIBREF2 except for the number of epochs, which we fix at 10. The AVG ensemble averages over BERT's and E-BERT's output distributions. Experiments were run on two GeForce GTX 1080 Ti GPUs with data-parallel training.
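A minimal sketch of this classification head is given below; the number of relation classes and the model name are illustrative, and the marker-based input follows the segmentations shown above:

import torch
from torch import nn
from transformers import BertModel, BertTokenizer

class RelationClassifier(nn.Module):
    def __init__(self, num_relations=80, model_name="bert-base-cased"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_relations)

    def forward(self, **inputs):
        cls_vec = self.encoder(**inputs).last_hidden_state[:, 0]  # [CLS] embedding
        return self.classifier(cls_vec)

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = RelationClassifier()
batch = tokenizer("$ Tangier $ ' s # Ibn Battouta Airport # is the busiest airport in the region .",
                  return_tensors="pt")
print(model(**batch).shape)  # (1, num_relations)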
A note on casing
The cased BERT vocabulary is a superset of the LAMA vocabulary. This ensures that BERT can in principle answer all LAMA queries correctly. The uncased ERNIE vocabulary does not have this property. For ERNIE, we therefore lowercase all queries and restrict the model output to the intersection of its vocabulary with the lowercased LAMA vocabulary. As a result, ERNIE selects an answer from $\sim $18K candidates (instead of the standard $\sim $21K), which should work in its favor. We verify that all lowercased object names from LAMA-T-REx and LAMA-Google-RE appear in ERNIE's vocabulary, i.e., ERNIE is in principle able to answer all lowercased queries correctly. | Answer with content missing: (Table 2) CONCAT ensemble |
0154d8be772193bfd70194110f125813057413a4 | 0154d8be772193bfd70194110f125813057413a4_0 | Q: What are the two ways of ensembling BERT and E-BERT?
Text: Introduction
Imagine that you have a friend who claims to know a lot of trivia. During a quiz, you ask them about the native language of actor Jean Marais. They correctly answer French. For a moment you are impressed, until you realize that Jean is a typical French name. So you ask the same question about Daniel Ceccaldi (another French actor, but with an Italian-sounding name). This time your friend says “Italian, I guess.” If this were a Question Answering (QA) benchmark, your friend would have achieved a respectable accuracy of 50%. Yet, their performance does not indicate factual knowledge about the native languages of actors. Rather, it shows that they are able to reason about the likely origins of peoples' names (see Table TABREF1 for more examples).
BIBREF1 argue that the unsupervised BERT LM BIBREF0 memorizes factual knowledge about entities and relations. They base this statement on the unsupervised QA benchmark LAMA (§SECREF2), where BERT rivals a knowledge base (KB) built by relation extraction. They suggest that BERT and similar LMs could become a “viable alternative to traditional knowledge bases extracted from text”. We argue that the impressive performance of BERT is partly due to reasoning about (the surface form of) entity names. In §SECREF4, we construct LAMA-UHN (UnHelpful Names), a more “factual” subset of LAMA-Google-RE and LAMA-T-REx, by filtering out queries that are easy to answer from entity names alone. We show that the performance of BERT decreases dramatically on LAMA-UHN.
In §SECREF3, we propose E-BERT, a simple mapping-based extension of BERT that replaces entity mentions with wikipedia2vec entity embeddings BIBREF3. In §SECREF4, we show that E-BERT rivals BERT and the recently proposed entity-enhanced ERNIE model BIBREF2 on LAMA. E-BERT has a substantial lead over both baselines on LAMA-UHN; furthermore, ensembles of E-BERT and BERT outperform all baselines on original LAMA.
LAMA
The LAMA (LAnguage Model Analysis) benchmark BIBREF1 is supposed to probe for “factual and commonsense knowledge” inherent in LMs. In this paper, we focus on LAMA-Google-RE and LAMA-T-REx BIBREF5, which are aimed at factual knowledge. Contrary to most previous works on QA, LAMA tests LMs as-is, without supervised finetuning.
The LAMA probing task follows this schema: Given a KB triple of the form (S, R, O), the object is elicited with a relation-specific cloze-style question, e.g., (Jean_Marais, native-language, French) becomes: “The native language of Jean Marais is [MASK].” The LM predicts a distribution over a limited vocabulary to replace [MASK], which is evaluated against the known gold answer.
LAMA ::: LAMA-UHN
It is often possible to guess properties of an entity from its name, with zero factual knowledge of the entity itself. This is because entities are often named according to implicit or explicit rules (e.g., the cultural norms involved in naming a child, copyright laws for industrial products, or simply a practical need for descriptive names). LAMA makes guessing even easier by its limited vocabulary, which may only contain a few candidates for a particular entity type.
We argue that a QA benchmark that does not control for entity names does not assess whether an LM is good at reasoning about names, good at memorizing facts, or both. In this Section, we describe the creation of LAMA-UHN (UnHelpfulNames), a subset of LAMA-Google-RE and LAMA-T-REx.
Filter 1: The string match filter deletes all KB triples where the correct answer (e.g., Apple) is a case-insensitive substring of the subject entity name (e.g., Apple Watch). This simple heuristic deletes up to 81% of triples from individual relations (see Appendix for statistics and examples).
Filter 2: Of course, entity names can be revealing in ways that are more subtle. As illustrated by our French actor example, a person's name can be a useful prior for guessing their native language and by extension, their nationality, place of birth, etc. Our person name filter uses cloze-style questions to elicit name associations inherent in BERT, and deletes KB triples that correlate with them. Consider our previous example (Jean_Marais, native-language, French). We whitespace-tokenize the subject name into Jean and Marais. If BERT considers either name to be a common French name, then a correct answer is insufficient evidence for factual knowledge about the entity Jean_Marais. On the other hand, if neither Jean nor Marais are considered French, but a correct answer is given nonetheless, then we consider this sufficient evidence for factual knowledge.
We query BERT for answers to “[X] is a common name in the following language: [MASK].” for both [X] = Jean and [X] = Marais. If the correct answer is among the top-3 for either query, we delete the triple. We apply this filter to Google-RE:place-of-birth, Google-RE:place-of-death, T-REx:P19 (place of birth), T-REx:P20 (place of death), T-REx:P27 (nationality), T-REx:P103 (native language) and T-REx:P1412 (language used). See Appendix for statistics. Depending on the relation, we replace “language” with “city” or “country” in the template.
Figure FIGREF5 (blue bars) shows that BERT is strongly affected by filtering, with a drop of 5%–10% mean P@1 from original LAMA to LAMA-UHN. This suggests that BERT does well on LAMA partly because it reasons about (the surface form of) entity names. Of course, name-based reasoning is a useful ability in its own right; however, conflating it with factual knowledge may be misleading.
E-BERT ::: BERT.
BERT BIBREF0 is a deep bidirectional transformer encoder BIBREF6 pretrained on unlabeled text. It segments text into subword tokens from a vocabulary $\mathbb {L}_b$. During training, some tokens are masked by a special [MASK] token. Tokens are embedded into real-valued vectors by an embedding function $\mathcal {E}_\mathcal {B} : \mathbb {L}_b \rightarrow \mathbb {R}^{d_\mathcal {B}}$. The embedded tokens are contextualized by the BERT encoder $\mathcal {B}$ and the output of $\mathcal {B}$ is fed into a function $\mathcal {M}_\mathcal {B}: \mathbb {R}^{d_\mathcal {B}} \rightarrow \mathbb {L}_b$ that predicts the identity of masked tokens. BERT can thus be used as an LM.
E-BERT ::: Wikipedia2vec.
Wikipedia2vec BIBREF3 embeds words and wikipedia pages ($\approx $ entities) in a common space. It learns an embedding function for a vocabulary of words $\mathbb {L}_w$ and a set of entities $\mathbb {L}_e$. We denote this function as $\mathcal {F}: \mathbb {L}_w \cup \mathbb {L}_e \rightarrow \mathbb {R}^{d_\mathcal {F}}$. The wikipedia2vec loss has three components: (a) skipgram word2vec BIBREF7 operating on $\mathbb {L}_w$ (b) a graph loss on the wikipedia link graph on $\mathbb {L}_e$ (c) a version of word2vec where words are predicted from entity mentions. Loss (c) ensures that word and entity embeddings share a space. Figure FIGREF5 (black horizontal bars) shows that loss (b) is vital for our use case.
E-BERT ::: E-BERT.
We want to transform the output space of $\mathcal {F}$ in such a way that $\mathcal {B}$ is fooled into accepting entity embeddings in lieu of its native subword embeddings. We approximate this goal by minimizing the squared distance of transformed wikipedia2vec word vectors and BERT subword vectors:
$\mathcal {W} = \mathop {\mathrm {argmin}}_{\mathcal {W}^{\prime }} \sum _{x \in \mathbb {L}_b \cap \mathbb {L}_w} \Vert \mathcal {W}^{\prime } \mathcal {F}(x) - \mathcal {E}_\mathcal {B}(x) \Vert _2^2,$
where $\mathcal {W}$ is a linear projection obtained by least squares. Since $\mathcal {F}$ embeds $\mathbb {L}_w$ and $\mathbb {L}_e$ into the same space, $\mathcal {W}$ is applicable to members of $\mathbb {L}_e$, even though it was learned on members of $\mathbb {L}_w$.
Recall that BERT segments text into subwords, e.g., our previous example is tokenized as: The native language of Jean Mara ##is is [MASK] .
E-BERT replaces the subwords that correspond to the entity mention with the symbolic entity: The native language of Jean_Marais is [MASK] .
The entity (truetype) is embedded by $\mathcal {W} \circ \mathcal {F}$, while other tokens (italics) continue to be embedded by $\mathcal {E}_\mathcal {B}$. The altered embedding sequence is fed into $\mathcal {B}$, where it is treated like any other embedding sequence. Neither $\mathcal {B}$ nor $\mathcal {M}_\mathcal {B}$ are changed.
We ensemble BERT and E-BERT by (a) mean-pooling their outputs (AVG) or (b) concatenating the entity and its name with a slash symbol (CONCAT), e.g.: Jean_Marais / Jean Mara ##is.
Experiments ::: Systems.
We train cased wikipedia2vec on a recent wikipedia dump (2019-09-02), setting $d_\mathcal {F} = d_\mathcal {B}$. To learn $\mathcal {W}$, we intersect the wikipedia2vec word vocabulary with the cased BERT vocabulary.
Our primary baselines are BERT$_\mathrm {base}$ and BERT$_\mathrm {large}$ as evaluated in BIBREF1. We also test ERNIE BIBREF2, a BERT$_\mathrm {base}$ type model that uses wikidata TransE entity embeddings BIBREF8 as additional input. ERNIE has two transformers, one for tokens and one for entities, which are fused by a trainable feed-forward module. To accommodate the new parameters, ERNIE is pre-trained with (a) standard BERT loss and (b) predicting Wikipedia entities.
Note that wikipedia2vec and TransE have low coverage on LAMA-Google-RE (wikipedia2vec: 54%, TransE: 71%). When an entity embedding is missing, we fall back onto original BERT. Coverage of LAMA-T-REx is $>98$% for both systems.
Experiments ::: LAMA.
In keeping with BIBREF1, we report P@k macro-averaged over relations. Macro-averaging ensures that every relation has the same impact on the metric before and after filtering.
Figure FIGREF5 shows that E-BERT performs comparable to BERT and ERNIE on unfiltered LAMA. However, E-BERT is less affected by filtering on LAMA-UHN, suggesting that its performance is more strongly due to factual knowledge. Recall that we lack entity embeddings for 46% of Google-RE subjects, i.e., E-BERT cannot improve over BERT on almost half of the Google-RE tuples.
Figure FIGREF15 plots deltas in mean P@1 on unfiltered LAMA-T-REx relations relative to BERT, along with the frequency of tuples whose object entity name is a substring of the subject entity name – i.e., the ratio of queries that would be deleted by the string match filter. We see that E-BERT losses relative to BERT (negative red bars) are mostly on relations with a high percentage of trivial substring answers. By contrast, E-BERT typically outperforms BERT on relations where such trivial answers are rare. The ensembles are able to mitigate the losses of E-BERT on almost all relations, while keeping most of its gains (purple and orange bars). This suggests that they successfully combine BERT's ability to reason about entity names with E-BERT's enhanced factual knowledge.
Figure FIGREF17 shows that the lead of E-BERT and the ensembles over BERT and ERNIE in terms of mean P@k is especially salient for bigger k.
Experiments ::: FewRel.
We also evaluate on the FewRel relation classification dataset BIBREF9, using the setup and data split from zhang2019ernie (see Appendix for details). Table TABREF19 shows that E-BERT beats BERT, and the ensembles perform comparable to ERNIE despite not having a dedicated entity encoder.
Related work
Factual QA is typically tackled as a supervised problem (e.g., BIBREF10, BIBREF11). In contrast, LAMA BIBREF1 tests for knowledge learned by LMs without supervision; similar experiments were performed by BIBREF12. Their experiments do not differentiate between factual knowledge of LMs and their ability to reason about entity names.
The E-BERT embedding mapping strategy is inspired by cross-lingual embedding mapping on identical strings BIBREF13. A similar method was recently applied by BIBREF14 to map cross-lingual FastText subword vectors BIBREF15 into the multilingual BERT subword embedding space. BIBREF16 mimic BERT subword embeddings for rare English words from their contexts and form.
Other contextualized models that incorporate entity embeddings are ERNIE BIBREF2 (see §SECREF4) and KnowBert BIBREF17. KnowBert is contemporaneous to our work, and at the time of writing, the model was not available for comparison.
Both ERNIE and KnowBert add new parameters to the BERT architecture, which must be integrated by additional pretraining. By contrast, E-BERT works with the unchanged BERT model, and $\mathcal {W}$ has an efficient closed-form solution. This means that we can update E-BERT to the newest wikipedia dump at little computational cost – the most expensive operation would be training wikipedia2vec, which takes a few hours on CPUs.
Conclusion
We have presented evidence that the surprising performance of BERT on the recently published LAMA QA benchmark is partly due to reasoning about entity names rather than factual knowledge. We have constructed more “factual” subsets of LAMA-Google-RE and LAMA-T-REx by filtering out easy-to-guess queries. The resulting benchmark, LAMA-UHN, is more difficult for BERT.
As a remedy, we proposed E-BERT, a simple extension of BERT that injects wikipedia2vec entity embeddings into BERT. E-BERT outperforms BERT and ERNIE on LAMA-UHN, which we take as evidence that E-BERT is richer in factual knowledge. Additionally, ensembling yields improvements over both BERT and E-BERT on unfiltered LAMA and on the FewRel relation classification dataset.
FewRel training
We use the sentence classification setup from BIBREF2. We mark subjects and objects with the symbols # and $, i.e., the inputs to BERT, E-BERT and the CONCAT ensemble look as follows:
[CLS] $ Tang ##ier $ ' s # Ibn Bat ##to ##uta Airport # is the busiest airport in the region . [SEP]
[CLS] $ Tangier $ ' s # Tangier_Ibn_Battouta_Airport # is the busiest airport in the region . [SEP]
[CLS] $ Tangier / Tang ##ier $ ' s # Tangier_Ibn_Battouta_Airport / Ibn Bat ##to ##uta Airport # is the busiest airport in the region . [SEP]
where entities (in truetype) are embedded by $\mathcal {W} \circ \mathcal {F}$ and all other tokens (in italics) are embedded by $\mathcal {E}_\mathcal {B}$. Note that entity IDs are provided by FewRel. If we lack an entity embedding, we fall back onto the standard BERT segmentation.
To predict the relation, we feed the contextualized embedding of the [CLS] token into a linear classifier. During training we finetune all network parameters except for the embeddings. For hyperparameter tuning, we use the ranges from BIBREF2 except for the number of epochs, which we fix at 10. The AVG ensemble averages over BERT's and E-BERT's output distributions. Experiments were run on two GeForce GTX 1080 Ti GPUs with data-parallel training.
A note on casing
The cased BERT vocabulary is a superset of the LAMA vocabulary. This ensures that BERT can in principle answer all LAMA queries correctly. The uncased ERNIE vocabulary does not have this property. For ERNIE, we therefore lowercase all queries and restrict the model output to the intersection of its vocabulary with the lowercased LAMA vocabulary. As a result, ERNIE selects an answer from $\sim $18K candidates (instead of the standard $\sim $21K), which should work in its favor. We verify that all lowercased object names from LAMA-T-REx and LAMA-Google-RE appear in ERNIE's vocabulary, i.e., ERNIE is in principle able to answer all lowercased queries correctly. | mean-pooling their outputs (AVG), concatenating the entity and its name with a slash symbol (CONCAT) |
e737cfe0f6cfc6d3ac6bec32231d9c893bfc3fc9 | e737cfe0f6cfc6d3ac6bec32231d9c893bfc3fc9_0 | Q: How is it determined that a fact is easy-to-guess?
Text: Introduction
Imagine that you have a friend who claims to know a lot of trivia. During a quiz, you ask them about the native language of actor Jean Marais. They correctly answer French. For a moment you are impressed, until you realize that Jean is a typical French name. So you ask the same question about Daniel Ceccaldi (another French actor, but with an Italian-sounding name). This time your friend says “Italian, I guess.” If this were a Question Answering (QA) benchmark, your friend would have achieved a respectable accuracy of 50%. Yet, their performance does not indicate factual knowledge about the native languages of actors. Rather, it shows that they are able to reason about the likely origins of peoples' names (see Table TABREF1 for more examples).
BIBREF1 argue that the unsupervised BERT LM BIBREF0 memorizes factual knowledge about entities and relations. They base this statement on the unsupervised QA benchmark LAMA (§SECREF2), where BERT rivals a knowledge base (KB) built by relation extraction. They suggest that BERT and similar LMs could become a “viable alternative to traditional knowledge bases extracted from text”. We argue that the impressive performance of BERT is partly due to reasoning about (the surface form of) entity names. In §SECREF4, we construct LAMA-UHN (UnHelpful Names), a more “factual” subset of LAMA-Google-RE and LAMA-T-REx, by filtering out queries that are easy to answer from entity names alone. We show that the performance of BERT decreases dramatically on LAMA-UHN.
In §SECREF3, we propose E-BERT, a simple mapping-based extension of BERT that replaces entity mentions with wikipedia2vec entity embeddings BIBREF3. In §SECREF4, we show that E-BERT rivals BERT and the recently proposed entity-enhanced ERNIE model BIBREF2 on LAMA. E-BERT has a substantial lead over both baselines on LAMA-UHN; furthermore, ensembles of E-BERT and BERT outperform all baselines on original LAMA.
LAMA
The LAMA (LAnguage Model Analysis) benchmark BIBREF1 is supposed to probe for “factual and commonsense knowledge” inherent in LMs. In this paper, we focus on LAMA-Google-RE and LAMA-T-REx BIBREF5, which are aimed at factual knowledge. Contrary to most previous works on QA, LAMA tests LMs as-is, without supervised finetuning.
The LAMA probing task follows this schema: Given a KB triple of the form (S, R, O), the object is elicited with a relation-specific cloze-style question, e.g., (Jean_Marais, native-language, French) becomes: “The native language of Jean Marais is [MASK].” The LM predicts a distribution over a limited vocabulary to replace [MASK], which is evaluated against the known gold answer.
LAMA ::: LAMA-UHN
It is often possible to guess properties of an entity from its name, with zero factual knowledge of the entity itself. This is because entities are often named according to implicit or explicit rules (e.g., the cultural norms involved in naming a child, copyright laws for industrial products, or simply a practical need for descriptive names). LAMA makes guessing even easier by its limited vocabulary, which may only contain a few candidates for a particular entity type.
We argue that a QA benchmark that does not control for entity names does not assess whether an LM is good at reasoning about names, good at memorizing facts, or both. In this Section, we describe the creation of LAMA-UHN (UnHelpfulNames), a subset of LAMA-Google-RE and LAMA-T-REx.
Filter 1: The string match filter deletes all KB triples where the correct answer (e.g., Apple) is a case-insensitive substring of the subject entity name (e.g., Apple Watch). This simple heuristic deletes up to 81% of triples from individual relations (see Appendix for statistics and examples).
Filter 2: Of course, entity names can be revealing in ways that are more subtle. As illustrated by our French actor example, a person's name can be a useful prior for guessing their native language and by extension, their nationality, place of birth, etc. Our person name filter uses cloze-style questions to elicit name associations inherent in BERT, and deletes KB triples that correlate with them. Consider our previous example (Jean_Marais, native-language, French). We whitespace-tokenize the subject name into Jean and Marais. If BERT considers either name to be a common French name, then a correct answer is insufficient evidence for factual knowledge about the entity Jean_Marais. On the other hand, if neither Jean nor Marais are considered French, but a correct answer is given nonetheless, then we consider this sufficient evidence for factual knowledge.
We query BERT for answers to “[X] is a common name in the following language: [MASK].” for both [X] = Jean and [X] = Marais. If the correct answer is among the top-3 for either query, we delete the triple. We apply this filter to Google-RE:place-of-birth, Google-RE:place-of-death, T-REx:P19 (place of birth), T-REx:P20 (place of death), T-REx:P27 (nationality), T-REx:P103 (native language) and T-REx:P1412 (language used). See Appendix for statistics. Depending on the relation, we replace “language” with “city” or “country” in the template.
Figure FIGREF5 (blue bars) shows that BERT is strongly affected by filtering, with a drop of 5%–10% mean P@1 from original LAMA to LAMA-UHN. This suggests that BERT does well on LAMA partly because it reasons about (the surface form of) entity names. Of course, name-based reasoning is a useful ability in its own right; however, conflating it with factual knowledge may be misleading.
E-BERT ::: BERT.
BERT BIBREF0 is a deep bidirectional transformer encoder BIBREF6 pretrained on unlabeled text. It segments text into subword tokens from a vocabulary $\mathbb {L}_b$. During training, some tokens are masked by a special [MASK] token. Tokens are embedded into real-valued vectors by an embedding function $\mathcal {E}_\mathcal {B} : \mathbb {L}_b \rightarrow \mathbb {R}^{d_\mathcal {B}}$. The embedded tokens are contextualized by the BERT encoder $\mathcal {B}$ and the output of $\mathcal {B}$ is fed into a function $\mathcal {M}_\mathcal {B}: \mathbb {R}^{d_\mathcal {B}} \rightarrow \mathbb {L}_b$ that predicts the identity of masked tokens. BERT can thus be used as an LM.
E-BERT ::: Wikipedia2vec.
Wikipedia2vec BIBREF3 embeds words and wikipedia pages ($\approx $ entities) in a common space. It learns an embedding function for a vocabulary of words $\mathbb {L}_w$ and a set of entities $\mathbb {L}_e$. We denote this function as $\mathcal {F}: \mathbb {L}_w \cup \mathbb {L}_e \rightarrow \mathbb {R}^{d_\mathcal {F}}$. The wikipedia2vec loss has three components: (a) skipgram word2vec BIBREF7 operating on $\mathbb {L}_w$ (b) a graph loss on the wikipedia link graph on $\mathbb {L}_e$ (c) a version of word2vec where words are predicted from entity mentions. Loss (c) ensures that word and entity embeddings share a space. Figure FIGREF5 (black horizontal bars) shows that loss (b) is vital for our use case.
E-BERT ::: E-BERT.
We want to transform the output space of $\mathcal {F}$ in such a way that $\mathcal {B}$ is fooled into accepting entity embeddings in lieu of its native subword embeddings. We approximate this goal by minimizing the squared distance of transformed wikipedia2vec word vectors and BERT subword vectors:
$\mathcal {W} = \mathop {\mathrm {argmin}}_{\mathcal {W}^{\prime }} \sum _{x \in \mathbb {L}_b \cap \mathbb {L}_w} \Vert \mathcal {W}^{\prime } \mathcal {F}(x) - \mathcal {E}_\mathcal {B}(x) \Vert _2^2,$
where $\mathcal {W}$ is a linear projection obtained by least squares. Since $\mathcal {F}$ embeds $\mathbb {L}_w$ and $\mathbb {L}_e$ into the same space, $\mathcal {W}$ is applicable to members of $\mathbb {L}_e$, even though it was learned on members of $\mathbb {L}_w$.
Recall that BERT segments text into subwords, e.g., our previous example is tokenized as: The native language of Jean Mara ##is is [MASK] .
E-BERT replaces the subwords that correspond to the entity mention with the symbolic entity: The native language of Jean_Marais is [MASK] .
The entity (truetype) is embedded by $\mathcal {W} \circ \mathcal {F}$, while other tokens (italics) continue to be embedded by $\mathcal {E}_\mathcal {B}$. The altered embedding sequence is fed into $\mathcal {B}$, where it is treated like any other embedding sequence. Neither $\mathcal {B}$ nor $\mathcal {M}_\mathcal {B}$ are changed.
We ensemble BERT and E-BERT by (a) mean-pooling their outputs (AVG) or (b) concatenating the entity and its name with a slash symbol (CONCAT), e.g.: Jean_Marais / Jean Mara ##is.
Experiments ::: Systems.
We train cased wikipedia2vec on a recent wikipedia dump (2019-09-02), setting $d_\mathcal {F} = d_\mathcal {B}$. To learn $\mathcal {W}$, we intersect the wikipedia2vec word vocabulary with the cased BERT vocabulary.
Our primary baselines are BERT$_\mathrm {base}$ and BERT$_\mathrm {large}$ as evaluated in BIBREF1. We also test ERNIE BIBREF2, a BERT$_\mathrm {base}$ type model that uses wikidata TransE entity embeddings BIBREF8 as additional input. ERNIE has two transformers, one for tokens and one for entities, which are fused by a trainable feed-forward module. To accommodate the new parameters, ERNIE is pre-trained with (a) standard BERT loss and (b) predicting Wikipedia entities.
Note that wikipedia2vec and TransE have low coverage on LAMA-Google-RE (wikipedia2vec: 54%, TransE: 71%). When an entity embedding is missing, we fall back onto original BERT. Coverage of LAMA-T-REx is $>98$% for both systems.
Experiments ::: LAMA.
In keeping with BIBREF1, we report P@k macro-averaged over relations. Macro-averaging ensures that every relation has the same impact on the metric before and after filtering.
Figure FIGREF5 shows that E-BERT performs comparable to BERT and ERNIE on unfiltered LAMA. However, E-BERT is less affected by filtering on LAMA-UHN, suggesting that its performance is more strongly due to factual knowledge. Recall that we lack entity embeddings for 46% of Google-RE subjects, i.e., E-BERT cannot improve over BERT on almost half of the Google-RE tuples.
Figure FIGREF15 plots deltas in mean P@1 on unfiltered LAMA-T-REx relations relative to BERT, along with the frequency of tuples whose object entity name is a substring of the subject entity name – i.e., the ratio of queries that would be deleted by the string match filter. We see that E-BERT losses relative to BERT (negative red bars) are mostly on relations with a high percentage of trivial substring answers. By contrast, E-BERT typically outperforms BERT on relations where such trivial answers are rare. The ensembles are able to mitigate the losses of E-BERT on almost all relations, while keeping most of its gains (purple and orange bars). This suggests that they successfully combine BERT's ability to reason about entity names with E-BERT's enhanced factual knowledge.
Figure FIGREF17 shows that the lead of E-BERT and the ensembles over BERT and ERNIE in terms of mean P@k is especially salient for bigger k.
Experiments ::: FewRel.
We also evaluate on the FewRel relation classification dataset BIBREF9, using the setup and data split from zhang2019ernie (see Appendix for details). Table TABREF19 shows that E-BERT beats BERT, and the ensembles perform comparable to ERNIE despite not having a dedicated entity encoder.
Related work
Factual QA is typically tackled as a supervised problem (e.g., BIBREF10, BIBREF11). In contrast, LAMA BIBREF1 tests for knowledge learned by LMs without supervision; similar experiments were performed by BIBREF12. Their experiments do not differentiate between factual knowledge of LMs and their ability to reason about entity names.
The E-BERT embedding mapping strategy is inspired by cross-lingual embedding mapping on identical strings BIBREF13. A similar method was recently applied by BIBREF14 to map cross-lingual FastText subword vectors BIBREF15 into the multilingual BERT subword embedding space. BIBREF16 mimic BERT subword embeddings for rare English words from their contexts and form.
Other contextualized models that incorporate entity embeddings are ERNIE BIBREF2 (see §SECREF4) and KnowBert BIBREF17. KnowBert is contemporaneous to our work, and at the time of writing, the model was not available for comparison.
Both ERNIE and KnowBert add new parameters to the BERT architecture, which must be integrated by additional pretraining. By contrast, E-BERT works with the unchanged BERT model, and $\mathcal {W}$ has an efficient closed-form solution. This means that we can update E-BERT to the newest wikipedia dump at little computational cost – the most expensive operation would be training wikipedia2vec, which takes a few hours on CPUs.
Conclusion
We have presented evidence that the surprising performance of BERT on the recently published LAMA QA benchmark is partly due to reasoning about entity names rather than factual knowledge. We have constructed more “factual” subsets of LAMA-Google-RE and LAMA-T-REx by filtering out easy-to-guess queries. The resulting benchmark, LAMA-UHN, is more difficult for BERT.
As a remedy, we proposed E-BERT, a simple extension of BERT that injects wikipedia2vec entity embeddings into BERT. E-BERT outperforms BERT and ERNIE on LAMA-UHN, which we take as evidence that E-BERT is richer in factual knowledge. Additionally, ensembling yields improvements over both BERT and E-BERT on unfiltered LAMA and on the FewRel relation classification dataset.
FewRel training
We use the sentence classification setup from BIBREF2. We mark subjects and objects with the symbols # and $, i.e., the inputs to BERT, E-BERT and the CONCAT ensemble look as follows:
[CLS] $ Tang ##ier $ ' s # Ibn Bat ##to ##uta Airport # is the busiest airport in the region . [SEP]
[CLS] $ Tangier $ ' s # Tangier_Ibn_Battouta_Airport # is the busiest airport in the region . [SEP]
[CLS] $ Tangier / Tang ##ier $ ' s # Tangier_Ibn_Battouta_Airport / Ibn Bat ##to ##uta Airport # is the busiest airport in the region . [SEP]
where entities (in truetype) are embedded by $\mathcal {W} \circ \mathcal {F}$ and all other tokens (in italics) are embedded by $\mathcal {E}_\mathcal {B}$. Note that entity IDs are provided by FewRel. If we lack an entity embedding, we fall back onto the standard BERT segmentation.
To predict the relation, we feed the contextualized embedding of the [CLS] token into a linear classifier. During training we finetune all network parameters except for the embeddings. For hyperparameter tuning, we use the ranges from BIBREF2 except for the number of epochs, which we fix at 10. The AVG ensemble averages over BERT's and E-BERT's output distributions. Experiments were run on two GeForce GTX 1080 Ti GPUs with data-parallel training.
A note on casing
The cased BERT vocabulary is a superset of the LAMA vocabulary. This ensures that BERT can in principle answer all LAMA queries correctly. The uncased ERNIE vocabulary does not have this property. For ERNIE, we therefore lowercase all queries and restrict the model output to the intersection of its vocabulary with the lowercased LAMA vocabulary. As a result, ERNIE selects an answer from $\sim $18K candidates (instead of the standard $\sim $21K), which should work in its favor. We verify that all lowercased object names from LAMA-T-REx and LAMA-Google-RE appear in ERNIE's vocabulary, i.e., ERNIE is in principle able to answer all lowercased queries correctly. | filter deletes all KB triples where the correct answer (e.g., Apple) is a case-insensitive substring of the subject entity name (e.g., Apple Watch), person name filter uses cloze-style questions to elicit name associations inherent in BERT, and deletes KB triples that correlate with them |
42be49b883eba268e3dbc5c3ff4631442657dcbb | 42be49b883eba268e3dbc5c3ff4631442657dcbb_0 | Q: How is dependency parsing empirically verified?
Text: Introduction
Constituent and dependency are two typical syntactic structure representation forms, as shown in Figure FIGREF1, which have been well studied from both linguistic and computational perspectives BIBREF0, BIBREF1. In earlier times, linguists and NLP researchers discussed how to encode lexical dependencies in phrase structures, as in Tree-adjoining grammar (TAG) BIBREF2 and head-driven phrase structure grammar (HPSG) BIBREF3.
Typical dependency treebanks are usually converted from constituent treebanks, though they may be independently annotated as well for the same languages. Meanwhile, constituent parsing can be accurately converted to dependencies (SD) representation by grammatical rules or machine learning methods BIBREF4, BIBREF5. Such mutual convertibility shows a close relation between constituent and dependency representation for the same sentence. Thus, it is a natural idea to study the relationship between constituent and dependency structures, and the joint learning of constituent and dependency parsing BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14.
To further exploit the strengths of both representation forms for even better parsing, in this work we propose a new model that is capable of synchronously parsing constituent and dependency structures.
Multitask learning (MTL) is a natural solution in neural models for multiple inputs and multiple outputs, which is adopted in this work to decode constituent and dependency in a single model. BIBREF15 indicates that when tasks are sufficiently similar, especially with syntactic nature, MTL would be useful. In contrast to previous work on deep MTL BIBREF16, BIBREF17, our model focuses on more related tasks and benefits from the strong inherent relation. At last, our model is evaluated on two benchmark treebanks for both constituent and dependency parsing. The empirical results show that our parser reaches new state-of-the-art for all parsing tasks.
Our Model
Using an encoder-decoder backbone, our model may be regarded as an extension of the constituent parsing model of BIBREF18, as shown in Figure FIGREF4. The difference is that in our model, constituent and dependency parsing share the same token representation and the same shared self-attention layers, while each has its own individual self-attention layers and subsequent processing layers. Our model includes four modules: token representation, self-attention encoder, and constituent and dependency parsing decoders.
Our Model ::: Token Representation
In our model, the token representation $x_i$ is composed of character, word and part-of-speech (POS) embeddings. For the character-level representation, we explore two types of encoders, CharCNNs BIBREF19, BIBREF20 and CharLSTM BIBREF18, as both types have been shown to be effective. For the word-level representation, we concatenate randomly initialized and pre-trained word embeddings. We consider two ways to compose the final token representation: summing, $x_i$=$x_{char}$+$x_{word}$+$x_{POS}$, and concatenation, $x_i$=[$x_{char}$;$x_{word}$;$x_{POS}$].
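For illustration, the two composition options can be written as below, with random tensors standing in for the character, word and POS embeddings (the CharLSTM/CharCNN encoders themselves are omitted):

import torch

def compose(x_char, x_word, x_pos, mode="sum"):
    if mode == "sum":      # x_i = x_char + x_word + x_pos
        return x_char + x_word + x_pos
    return torch.cat([x_char, x_word, x_pos], dim=-1)  # x_i = [x_char; x_word; x_pos]

x_char, x_word, x_pos = (torch.randn(5, 100) for _ in range(3))
print(compose(x_char, x_word, x_pos, "sum").shape)     # (5, 100)
print(compose(x_char, x_word, x_pos, "concat").shape)  # (5, 300)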
Our Model ::: Self-Attention Encoder
The encoder in our model is adapted from BIBREF21 to factor explicit content and position information in the self-attention process BIBREF18. The input matrices $X = [x_1, x_2, \dots , x_n ]$ in which $x_i$ is concatenated with position embedding are transformed by a self-attention encoder. We factor the model between content and position information both in self-attention sub-layer and feed-forward network, whose setting details follow BIBREF18. We also try different numbers of shared self-attention layers in section SECREF15.
Our Model ::: Constituent Parsing Decoder
The score $s(T)$ of a constituent parse tree $T$ is the sum of the scores of every span ($i$, $j$) with label $\ell $:
$ s(T) = \sum _{(i,j,\ell )\in T} s(i, j, \ell ). $
The goal of the constituent parser is to find the tree with the highest score: $ \hat{T} = \arg \max _T s(T). $ We use a CKY-style algorithm to obtain the tree $\hat{T}$ in $O(n^3)$ time complexity BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26.
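A compact sketch of such a CKY-style search is shown below; span_score is assumed to already maximize over labels, and this illustration omits the label back-pointers and unary handling of a full chart parser:

from functools import lru_cache

def cky_best_tree(n, span_score):
    # Return the best total score and binary split structure for a length-n sentence.
    @lru_cache(maxsize=None)
    def best(i, j):
        if j - i == 1:
            return span_score(i, j), []
        best_val, best_split = float("-inf"), None
        for k in range(i + 1, j):            # O(n^3) overall
            (ls, lt), (rs, rt) = best(i, k), best(k, j)
            if ls + rs > best_val:
                best_val, best_split = ls + rs, [(i, k, lt), (k, j, rt)]
        return span_score(i, j) + best_val, best_split
    return best(0, n)

score, tree = cky_best_tree(4, lambda i, j: float(j - i))  # toy span scores
print(score)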
This structured prediction problem is handled by satisfying the margin constraint:
$ s(T^*) \ge s(T) + \Delta (T,T^*), $
where $T^*$ denotes the correct parse tree and $\Delta $ is the Hamming loss on labeled spans, with a slight modification during the dynamic programming search. The objective function is the hinge loss,
$ J_1(\theta ) = \max (0, \max _{T}[s(T) + \Delta (T,T^*)] - s(T^*)). $
Our Model ::: Dependency Parsing Decoder
Similar to the constituent case, dependency parsing searches over all possible trees to find the globally highest-scoring tree. Following BIBREF27 and BIBREF28, we predict a distribution over the possible heads for each word, and only during testing do we find the globally highest-scoring tree conditioned on these distributions.
We use the biaffine attention mechanism BIBREF27 between each word and the candidates of the parent node:
$\alpha _{ij} = h_i^TWg_j + U^Th_i + V^T g_j + b,$
where $h_i$ and $g_i$ are calculated by a distinct one-layer perceptron network.
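A minimal biaffine arc scorer in the spirit of this equation is sketched below; the 1024-dimensional sizes follow the setup described later, but all shapes here are illustrative:

import torch
from torch import nn

class BiaffineArcScorer(nn.Module):
    def __init__(self, enc_dim=1024, arc_dim=1024):
        super().__init__()
        self.dep_mlp = nn.Sequential(nn.Linear(enc_dim, arc_dim), nn.ReLU())   # h_i
        self.head_mlp = nn.Sequential(nn.Linear(enc_dim, arc_dim), nn.ReLU())  # g_j
        self.W = nn.Parameter(torch.zeros(arc_dim, arc_dim))
        self.U = nn.Parameter(torch.zeros(arc_dim))
        self.V = nn.Parameter(torch.zeros(arc_dim))
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, enc):  # enc: (batch, n, enc_dim) from the self-attention encoder
        h, g = self.dep_mlp(enc), self.head_mlp(enc)
        bilinear = torch.einsum("bid,de,bje->bij", h, self.W, g)          # h_i^T W g_j
        return bilinear + (h @ self.U).unsqueeze(2) + (g @ self.V).unsqueeze(1) + self.b

scores = BiaffineArcScorer()(torch.randn(2, 7, 1024))
print(scores.shape)  # (2, 7, 7): alpha_ij, the score of head candidate j for word i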
The dependency parser minimizes the negative log-likelihood of the gold tree $Y$, which is implemented as a cross-entropy loss:
$ J_2(\theta ) = - \left(logP_{\theta }(h_i|x_i) +logP_{\theta }(l_i|x_i,h_i)\right), $
where $P_{\theta }(h_i|x_i)$ is the probability of correct parent node $h_i$ for $x_i$, and $P_{\theta }(l_i|x_i,h_i)$ is the probability of the correct dependency label $l_i$ for the child-parent pair $(x_i,h_i)$.
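The two terms can be implemented directly as cross-entropy losses, as in the sketch below with random tensors standing in for the real scores:

import torch
import torch.nn.functional as F

def dependency_loss(arc_scores, label_scores, gold_heads, gold_labels):
    # arc_scores: (n, n) head scores per word; label_scores: (n, n_labels) for the gold pairs.
    arc_loss = F.cross_entropy(arc_scores, gold_heads)       # -log P(h_i | x_i)
    label_loss = F.cross_entropy(label_scores, gold_labels)  # -log P(l_i | x_i, h_i)
    return arc_loss + label_loss

n, n_labels = 7, 40
loss = dependency_loss(torch.randn(n, n), torch.randn(n, n_labels),
                       torch.randint(0, n, (n,)), torch.randint(0, n_labels, (n,)))
print(loss.item())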
During parsing, we use the first-order Eisner algorithm BIBREF29 to build projective trees.
Our Model ::: Joint training
Our joint model synchronously predicts the dependency tree and the constituent tree over the same input sentence. The output of the self-attention encoder is sent to each decoder to generate the corresponding parse tree. Thus, the shared components of the two parsers are the token representation layer and the self-attention encoder.
We jointly train the constituent and dependency parser for minimizing the overall loss:
$J_{model}(\theta ) = J_1(\theta ) + \lambda J_2(\theta ),$
where $\lambda $ is a hyper-parameter that controls the balance of the overall loss. The best performance is achieved when $\lambda $ is set to 1.0, which indicates that both sides are equally important.
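One joint training step therefore looks roughly as follows; encoder, const_decoder, dep_decoder and their loss methods are assumed placeholders standing in for the components described above, not the authors' actual code:

def joint_training_step(batch, encoder, const_decoder, dep_decoder, optimizer, lam=1.0):
    shared = encoder(batch["tokens"])                    # shared self-attention output
    j1 = const_decoder.loss(shared, batch["constituent_tree"])
    j2 = dep_decoder.loss(shared, batch["dependency_tree"])
    loss = j1 + lam * j2                                 # J_model = J_1 + lambda * J_2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()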
Experiments
We evaluate our model on two benchmark treebanks, English Penn Treebank (PTB) and Chinese Penn Treebank (CTB5.1) following standard data splitting BIBREF30, BIBREF31. POS tags are predicted by the Stanford Tagger BIBREF32. For constituent parsing, we use the standard evalb tool to evaluate the F1 score. For dependency parsing, we apply Stanford basic dependencies (SD) representation BIBREF4 converted by the Stanford parser. Following previous work BIBREF27, BIBREF33, we report the results without punctuations for both treebanks.
Experiments ::: Setup
We use the same experimental settings as BIBREF18. For dependency parsing, we employ two 1024-dimensional multilayer perceptrons for learning specific representation and a 1024-dimensional parameter matrix for biaffine attention. We use 100D GloVe BIBREF34 for English and structured-skipgram BIBREF35 embeddings for Chinese.
Experiments ::: Ablation Studies
All experiments in this subsection are run with the summing setting for the token representation.
Token Representation Different token representation combinations are evaluated in Table TABREF13. We find that CharLSTM performs a little better than CharCNNs. Moreover, the results for POS tags show that predicted POS tags decrease parsing accuracy, especially without word information. If POS tags are replaced by word embeddings, the performance increases. Finally, we use word and CharLSTM embeddings as the token representation setting for our full model.
Shared Self-attention Layers As our model produces two outputs from one input, we must decide how much of the network should be shared. The constituent and dependency parsers share the token representation and at most 8 self-attention layers. Assuming that either parser always passes its input through 8 self-attention layers as shown in Figure FIGREF4, the number of shared self-attention layers, varying from 0 to 8, reflects the degree of sharing in the model. When the number is set to 0, only the token representation is shared, and both parsers are trained for the joint loss through their own 8 self-attention layers. When the number is set to less than 8, for example 6, both parsers first share 6 layers after the token representation and then have 2 individual self-attention layers each.
The results for different numbers of shared layers are in Table TABREF14. We disable the constituent and the dependency parser in turn to obtain a separate learning setting for each parser in our model. The comparison in Table TABREF14 indicates that even without any shared self-attention layers, joint training of our model significantly outperforms the separate learning mode. Finally, the best performance is still obtained by sharing all 8 self-attention layers.
Besides, comparing UAS and LAS to the F1 score, dependency parsing is shown to benefit more from our model, with more than a 1% gain in UAS and LAS from parsing constituents together.
Experiments ::: Main Results
Tables TABREF17, TABREF18 and TABREF19 compare our model to existing state-of-the-art, in which indicator Separate with our model shows the results of our model learning constituent or dependency parsing separately, (Sum) and (Concat) respectively represent the results with the indicated input token representation setting. On PTB, our model achieves 93.90 F1 score of constituent parsing and 95.91 UAS and 93.86 LAS of dependency parsing. On CTB, our model achieves a new state-of-the-art result on both constituent and dependency parsing. The comparison again suggests that learning jointly in our model is superior to learning separately. In addition, we also augment our model with ELMo BIBREF48 or a larger version of BERT BIBREF49 as the sole token representation to compare with other pre-training models. Since BERT is based on sub-word, we only take the last sub-word vector of the word in the last layer of BERT as our sole token representation $x_i$. Moreover, our single model of BERT achieves competitive performance with other ensemble models.
Conclusions
This paper presents a joint model of constituent and dependency parsing which achieves new state-of-the-art results on both Chinese and English benchmark treebanks. Our ablation studies show that joint learning of constituents and dependencies is indeed superior to the separate learning mode. Also, experiments show that dependency parsing benefits much more from knowing the constituent structure. Our parser predicts phrase structure and head-words simultaneously, and can thus be regarded as an effective HPSG BIBREF3 parser. | At last, our model is evaluated on two benchmark treebanks for both constituent and dependency parsing. The empirical results show that our parser reaches new state-of-the-art for all parsing tasks. |
8d4f0815f8a23fe45c298c161fc7a27f3bb0d338 | 8d4f0815f8a23fe45c298c161fc7a27f3bb0d338_0 | Q: How are different network components evaluated?
Text: Introduction
Constituent and dependency are two typical syntactic structure representation forms, as shown in Figure FIGREF1, which have been well studied from both linguistic and computational perspectives BIBREF0, BIBREF1. In earlier times, linguists and NLP researchers discussed how to encode lexical dependencies in phrase structures, as in Tree-adjoining grammar (TAG) BIBREF2 and head-driven phrase structure grammar (HPSG) BIBREF3.
Typical dependency treebanks are usually converted from constituent treebanks, though they may be independently annotated as well for the same languages. Meanwhile, constituent parsing can be accurately converted to dependencies (SD) representation by grammatical rules or machine learning methods BIBREF4, BIBREF5. Such mutual convertibility shows a close relation between constituent and dependency representation for the same sentence. Thus, it is a natural idea to study the relationship between constituent and dependency structures, and the joint learning of constituent and dependency parsing BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14.
To further exploit the strengths of both representation forms for even better parsing, in this work we propose a new model that is capable of synchronously parsing constituent and dependency structures.
Multitask learning (MTL) is a natural solution in neural models for multiple inputs and multiple outputs, which is adopted in this work to decode constituent and dependency in a single model. BIBREF15 indicates that when tasks are sufficiently similar, especially with syntactic nature, MTL would be useful. In contrast to previous work on deep MTL BIBREF16, BIBREF17, our model focuses on more related tasks and benefits from the strong inherent relation. At last, our model is evaluated on two benchmark treebanks for both constituent and dependency parsing. The empirical results show that our parser reaches new state-of-the-art for all parsing tasks.
Our Model
Using an encoder-decoder backbone, our model may be regarded as an extension of the constituent parsing model of BIBREF18, as shown in Figure FIGREF4. The difference is that in our model, constituent and dependency parsing share the same token representation and the same shared self-attention layers, while each has its own individual self-attention layers and subsequent processing layers. Our model includes four modules: token representation, self-attention encoder, and constituent and dependency parsing decoders.
Our Model ::: Token Representation
In our model, the token representation $x_i$ is composed of character, word and part-of-speech (POS) embeddings. For the character-level representation, we explore two types of encoders, CharCNNs BIBREF19, BIBREF20 and CharLSTM BIBREF18, as both types have been shown to be effective. For the word-level representation, we concatenate randomly initialized and pre-trained word embeddings. We consider two ways to compose the final token representation: summing, $x_i$=$x_{char}$+$x_{word}$+$x_{POS}$, and concatenation, $x_i$=[$x_{char}$;$x_{word}$;$x_{POS}$].
Our Model ::: Self-Attention Encoder
The encoder in our model is adapted from BIBREF21 to factor explicit content and position information in the self-attention process BIBREF18. The input matrices $X = [x_1, x_2, \dots , x_n ]$ in which $x_i$ is concatenated with position embedding are transformed by a self-attention encoder. We factor the model between content and position information both in self-attention sub-layer and feed-forward network, whose setting details follow BIBREF18. We also try different numbers of shared self-attention layers in section SECREF15.
Our Model ::: Constituent Parsing Decoder
The score $s(T)$ of a constituent parse tree $T$ is the sum of the scores of every span ($i$, $j$) with label $\ell $:
$ s(T) = \sum _{(i,j,\ell )\in T} s(i, j, \ell ). $
The goal of the constituent parser is to find the tree with the highest score: $ \hat{T} = \arg \max _T s(T). $ We use a CKY-style algorithm to obtain the tree $\hat{T}$ in $O(n^3)$ time complexity BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26.
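A compact sketch of CKY-style decoding over precomputed span scores; the array layout and function names below are ours, not the paper's implementation:

```python
import numpy as np

def cky_decode(span_scores):
    """Find the binary tree maximizing the sum of s(i, j, label) via CKY.

    span_scores: array of shape (n+1, n+1, L); span_scores[i, j, l] is the
    score of span (i, j) (fencepost indices, i < j) with label l.
    Returns (best_score, spans) with spans as a list of (i, j, label).
    """
    n = span_scores.shape[0] - 1
    best = np.full((n + 1, n + 1), -np.inf)
    best_label = np.zeros((n + 1, n + 1), dtype=int)
    split = np.zeros((n + 1, n + 1), dtype=int)

    for length in range(1, n + 1):
        for i in range(0, n - length + 1):
            j = i + length
            label = int(np.argmax(span_scores[i, j]))
            if length == 1:
                children = 0.0
            else:
                k = max(range(i + 1, j), key=lambda s: best[i, s] + best[s, j])
                split[i, j] = k
                children = best[i, k] + best[k, j]
            best[i, j] = span_scores[i, j, label] + children
            best_label[i, j] = label

    spans = []
    def backtrack(i, j):
        spans.append((i, j, int(best_label[i, j])))
        if j - i > 1:
            k = split[i, j]
            backtrack(i, k); backtrack(k, j)
    backtrack(0, n)
    return best[0, n], spans

# toy usage: 4 words, 3 labels
print(cky_decode(np.random.rand(5, 5, 3)))
```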
This structured prediction problem is handled by enforcing the margin constraint:
$ s(T^*) \ge s(T) + \Delta (T,T^*), $
where $T^*$ denotes the correct parse tree and $\Delta $ is the Hamming loss on labeled spans with a slight modification during the dynamic programming search. The objective function is the hinge loss, $ J_1(\theta ) = \max \left(0,\; \max _{T}\left[s(T) + \Delta (T, T^*)\right] - s(T^*)\right). $
Our Model ::: Dependency Parsing Decoder
Similar to the constituent case, dependency parsing searches over all possible trees to find the globally highest-scoring tree. We follow BIBREF27 and BIBREF28 to predict a distribution over the possible heads for each word, and only at test time do we find the globally highest-scoring tree conditioned on these per-word distributions.
We use the biaffine attention mechanism BIBREF27 between each word and its candidate parent nodes:
$\alpha _{ij} = h_i^TWg_j + U^Th_i + V^T g_j + b,$
where $h_i$ and $g_j$ are calculated by distinct one-layer perceptron networks.
The dependency parser minimizes the negative log-likelihood of the gold tree $Y$, which is implemented as a cross-entropy loss:
$ J_2(\theta ) = - \left(\log P_{\theta }(h_i|x_i) + \log P_{\theta }(l_i|x_i,h_i)\right), $
where $P_{\theta }(h_i|x_i)$ is the probability of the correct parent node $h_i$ for $x_i$, and $P_{\theta }(l_i|x_i,h_i)$ is the probability of the correct dependency label $l_i$ for the child-parent pair $(x_i,h_i)$.
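A small numpy sketch of the biaffine arc scorer and the head-prediction part of the loss described above; shapes and names are ours, and the label classifier is omitted:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def biaffine_arc_scores(H, G, W, U, V, b):
    """alpha[i, j] = h_i^T W g_j + U^T h_i + V^T g_j + b.

    H: (n, d) dependent ("child") representations h_i
    G: (n, d) head ("parent") candidate representations g_j
    W: (d, d), U: (d,), V: (d,), b: scalar
    """
    return H @ W @ G.T + (H @ U)[:, None] + (G @ V)[None, :] + b

def arc_nll(alpha, gold_heads):
    """Negative log-likelihood of the gold head for every word."""
    P = softmax(alpha, axis=1)  # P[i, j] = P(head = j | word i)
    return -np.log(P[np.arange(len(gold_heads)), gold_heads]).sum()
```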
During parsing, we use the first-order Eisner algorithm BIBREF29 to build projective trees.
Our Model ::: Joint training
Our joint model synchronously predicts the dependency tree and the constituent tree over the same input sentence. The output of the self-attention encoder is sent to the two decoders to generate their respective parse trees. Thus, the shared components of the two parsers are the token representation layer and the self-attention encoder.
We jointly train the constituent and dependency parsers by minimizing the overall loss:
$J_{model}(\theta ) = J_1(\theta ) + \lambda J_2(\theta ),$
where $\lambda $ is a hyper-parameter that balances the two losses. The best performance is achieved when $\lambda $ is set to 1.0, which suggests that both objectives are equally important.
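As a small illustration of the joint objective (variable names are ours), the two decoder losses computed on top of the shared encoding are simply added:

```python
def joint_loss(J1_constituent, J2_dependency, lam=1.0):
    """J_model(theta) = J1(theta) + lambda * J2(theta); lambda = 1.0 works best."""
    return J1_constituent + lam * J2_dependency

# toy usage with loss values produced by the two decoders on a shared encoding
print(joint_loss(J1_constituent=2.31, J2_dependency=1.87, lam=1.0))
```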
Experiments
We evaluate our model on two benchmark treebanks, the English Penn Treebank (PTB) and the Chinese Penn Treebank (CTB5.1), following the standard data splits BIBREF30, BIBREF31. POS tags are predicted by the Stanford Tagger BIBREF32. For constituent parsing, we use the standard evalb tool to evaluate the F1 score. For dependency parsing, we apply the Stanford basic dependencies (SD) representation BIBREF4 converted by the Stanford parser. Following previous work BIBREF27, BIBREF33, we report results without punctuation for both treebanks.
Experiments ::: Setup
We use the same experimental settings as BIBREF18. For dependency parsing, we employ two 1024-dimensional multilayer perceptrons for learning specific representation and a 1024-dimensional parameter matrix for biaffine attention. We use 100D GloVe BIBREF34 for English and structured-skipgram BIBREF35 embeddings for Chinese.
Experiments ::: Ablation Studies
All experiments in this subsection use the summing setting for the token representation.
Token Representation Different token representation combinations are evaluated in Table TABREF13. We find that CharLSTM performs a little better than CharCNNs. Moreover, the effect of POS tags on parsing performance shows that predicted POS tags decrease parsing accuracy, especially in the absence of word information. If POS tags are replaced by word embeddings, the performance increases. Finally, we use word and CharLSTM embeddings as the token representation of our full model.
Shared Self-attention Layers Since our model produces two outputs from one input, we must decide how much of the network the two parsers share. Both the constituent and the dependency parser share the token representation and at most 8 self-attention layers. Assuming that either parser always passes its input through 8 self-attention layers as shown in Figure FIGREF4, the number of shared self-attention layers, varying from 0 to 8, reflects the degree of sharing in the model. When the number is set to 0, only the token representation is shared and both parsers are trained for the joint loss through their own 8 self-attention layers. When the number is set to less than 8, for example 6, both parsers first share 6 layers on top of the token representation and then have 2 individual self-attention layers each.
For different numbers of shared layers, the results are shown in Table TABREF14. We disable the constituent and the dependency parser in turn to obtain a separate learning setting for each parser in our model. The comparison in Table TABREF14 indicates that even without any shared self-attention layers, joint training of our model significantly outperforms the separate learning mode. The best performance is still obtained by sharing the full 8 self-attention layers.
Besides, comparing UAS and LAS to the F1 score, dependency parsing benefits more from our model, with a gain of more than 1% in UAS and LAS from parsing constituents jointly.
Experiments ::: Main Results
Tables TABREF17, TABREF18 and TABREF19 compare our model to the existing state of the art. The indicator Separate marks the results of our model learning constituent or dependency parsing separately, and (Sum) and (Concat) respectively denote the results with the indicated input token representation setting. On PTB, our model achieves a 93.90 F1 score for constituent parsing, and 95.91 UAS and 93.86 LAS for dependency parsing. On CTB, our model achieves a new state-of-the-art result on both constituent and dependency parsing. The comparison again suggests that learning jointly in our model is superior to learning separately. In addition, we also augment our model with ELMo BIBREF48 or a larger version of BERT BIBREF49 as the sole token representation to compare with other pre-training models. Since BERT is based on sub-words, we only take the last sub-word vector of each word in the last layer of BERT as our sole token representation $x_i$. Moreover, our single BERT-based model achieves performance competitive with other ensemble models.
Conclusions
This paper presents a joint model of constituent and dependency parsing which achieves new state-of-the-art results on both the Chinese and English benchmark treebanks. Our ablation studies show that joint learning of constituents and dependencies is indeed superior to the separate learning mode. Also, experiments show that dependency parsing benefits much more from knowing the constituent structure. Our parser predicts phrase structure and head words simultaneously and can be regarded as an effective HPSG BIBREF3 parser. | For different numbers of shared layers, the results are in Table TABREF14. We respectively disable the constituent and the dependency parser to obtain a separate learning setting for both parsers in our model. |
a6665074b067abb2676d5464f36b2cb07f6919d3 | a6665074b067abb2676d5464f36b2cb07f6919d3_0 | Q: What are the performances obtained for PTB and CTB?
Text: (verbatim duplicate of the paper text above) | On PTB, our model achieves 93.90 F1 score of constituent parsing and 95.91 UAS and 93.86 LAS of dependency parsing. On CTB, our model achieves a new state-of-the-art result on both constituent and dependency parsing. |
b0fbd4b0f02b877a0d3df1d8ccc47d90dd49147c | b0fbd4b0f02b877a0d3df1d8ccc47d90dd49147c_0 | Q: What are the models used to perform constituency and dependency parsing?
Text: (verbatim duplicate of the paper text above) | token representation, self-attention encoder, Constituent Parsing Decoder, Dependency Parsing Decoder |
3288a50701a80303fd71c8c5ede81cbee14fa2c7 | 3288a50701a80303fd71c8c5ede81cbee14fa2c7_0 | Q: Is the proposed layer smaller in parameters than a Transformer?
Text: Introduction
The capability of deep neural models of handling complex dependencies has benefited various artificial intelligence tasks, such as image recognition where test error was reduced by scaling VGG nets BIBREF0 up to hundreds of convolutional layers BIBREF1. In NLP, deep self-attention networks have enabled large-scale pretrained language models such as BERT BIBREF2 and GPT BIBREF3 to boost state-of-the-art (SOTA) performance on downstream applications. By contrast, though neural machine translation (NMT) gained encouraging improvement when shifting from a shallow architecture BIBREF4 to deeper ones BIBREF5, BIBREF6, BIBREF7, BIBREF8, the Transformer BIBREF9, a currently SOTA architecture, achieves best results with merely 6 encoder and decoder layers, and no gains were reported by BIBREF9 from further increasing its depth on standard datasets.
We start by analysing why the Transformer does not scale well to larger model depth. We find that the architecture suffers from gradient vanishing as shown in Figure FIGREF2, leading to poor convergence. An in-depth analysis reveals that the Transformer is not norm-preserving due to the interaction between residual connections (RC) BIBREF1 and layer normalization (LN) BIBREF10.
To address this issue, we propose depth-scaled initialization (DS-Init) to improve norm preservation. We ascribe the gradient vanishing to the large output variance of RC and resort to strategies that could reduce it without model structure adjustment. Concretely, DS-Init scales down the variance of parameters in the $l$-th layer with a discount factor of $\frac{1}{\sqrt{l}}$ at the initialization stage alone, where $l$ denotes the layer depth starting from 1. The intuition is that parameters with small variance in upper layers would narrow the output variance of corresponding RCs, improving norm preservation as shown by the dashed lines in Figure FIGREF2. In this way, DS-Init enables the convergence of deep Transformer models to satisfactory local optima.
Another bottleneck for deep Transformers is the increase in computational cost for both training and decoding. To combat this, we propose a merged attention network (MAtt). MAtt simplifies the decoder by replacing the separate self-attention and encoder-decoder attention sublayers with a new sublayer that combines an efficient variant of average-based self-attention (AAN) BIBREF11 and the encoder-decoder attention. We simplify the AAN by reducing the number of linear transformations, reducing both the number of model parameters and computational cost. The merged sublayer benefits from parallel calculation of (average-based) self-attention and encoder-decoder attention, and reduces the depth of each decoder block.
We conduct extensive experiments on WMT and IWSLT translation tasks, covering five translation tasks with varying data conditions and translation directions. Our results show that deep Transformers with DS-Init and MAtt can substantially outperform their base counterpart in terms of BLEU (+1.1 BLEU on average for 12-layer models), while matching the decoding speed of the baseline model thanks to the efficiency improvements of MAtt.
Our contributions are summarized as follows:
We analyze the vanishing gradient issue in the Transformer, and identify the interaction of residual connections and layer normalization as its source.
To address this problem, we introduce depth-scaled initialization (DS-Init).
To reduce the computational cost of training deep Transformers, we introduce a merged attention model (MAtt). MAtt combines a simplified average-attention model and the encoder-decoder attention into a single sublayer, allowing for parallel computation.
We conduct extensive experiments and verify that deep Transformers with DS-Init and MAtt improve translation quality while preserving decoding efficiency.
Related Work
Our work aims at improving translation quality by increasing model depth. Compared with the single-layer NMT system BIBREF4, deep NMT models are typically more capable of handling complex language variations and translation relationships via stacking multiple encoder and decoder layers BIBREF5, BIBREF6, BIBREF12, BIBREF8, and/or multiple attention layers BIBREF7. One common problem for the training of deep neural models is vanishing or exploding gradients. Existing methods mainly focus on developing novel network architectures so as to stabilize gradient back-propagation, such as the fast-forward connection BIBREF5, the linear associative unit BIBREF13, or gated recurrent network variants BIBREF14, BIBREF15, BIBREF16, BIBREF17. In contrast to the above recurrent network based NMT models, recent work focuses on feed-forward alternatives with smoother gradient flow, such as convolutional networks BIBREF18 and self-attention networks BIBREF9.
The Transformer represents the current SOTA in NMT. It heavily relies on the combination of residual connections BIBREF1 and layer normalization BIBREF10 for convergence. Nevertheless, simply extending this model with more layers results in gradient vanishing due to the interaction of RC and LN (see Section SECREF4). Recent work has proposed methods to train deeper Transformer models, including a rescheduling of RC and LN BIBREF19, the transparent attention model BIBREF20 and the stochastic residual connection BIBREF21. In contrast to these work, we identify the large output variance of RC as the source of gradient vanishing, and employ scaled initialization to mitigate it without any structure adjustment. The effect of careful initialization on boosting convergence was also investigated and verified in previous work BIBREF22, BIBREF23, BIBREF2, BIBREF3.
The merged attention network falls into the category of simplifying the Transformer so as to shorten training and/or decoding time. Methods to improve the Transformer's running efficiency range from algorithmic improvements BIBREF24, non-autoregressive translation BIBREF25, BIBREF26 to decoding dependency reduction such as average attention network BIBREF11 and blockwise parallel decoding BIBREF27. Our MAtt builds upon the AAN model, further simplifying the model by reducing the number of linear transformations, and combining it with the encoder-decoder attention. In work concurrent to ours, BIBREF28 propose the evolved Transformer which, based on automatic architecture search, also discovered a parallel structure of self-attention and encoder-decoder attention.
Background: Transformer
Given a source sequence $\mathbf {X}=\lbrace x_1, x_2, \ldots , x_n\rbrace \in \mathbb {R}^{n\times d}$, the Transformer predicts a target sequence $\mathbf {Y}=\lbrace y_1, y_2, \ldots , y_m\rbrace $ under the encoder-decoder framework. Both the encoder and the decoder in the Transformer are composed of attention networks, functioning as follows: $\textsc {Att}(\mathbf {Z}_x, \mathbf {Z}_y) = \mathrm {softmax}\left(\frac{(\mathbf {Z}_x\mathbf {W}_q)(\mathbf {Z}_y\mathbf {W}_k)^T}{\sqrt{d}}\right)(\mathbf {Z}_y\mathbf {W}_v),$
where $\mathbf {Z}_x \in \mathbb {R}^{I\times d}$ and $\mathbf {Z}_y \in \mathbb {R}^{J\times d}$ are input sequence representations of length $I$ and $J$ respectively, $\mathbf {W}_* \in \mathbb {R}^{d\times d}$ denote weight parameters. The attention network can be further enhanced with multi-head attention BIBREF9.
Formally, the encoder stacks $L$ identical layers, each including a self-attention sublayer (Eq. DISPLAY_FORM8) and a point-wise feed-forward sublayer (Eq. ): $\mathbf {C}^l = \textsc {Ln}\left(\textsc {Rc}\left(\mathbf {H}^{l-1}, \textsc {Att}(\mathbf {H}^{l-1}, \mathbf {H}^{l-1})\right)\right), \quad \mathbf {H}^l = \textsc {Ln}\left(\textsc {Rc}\left(\mathbf {C}^l, \textsc {Ffn}(\mathbf {C}^l)\right)\right).$
$\mathbf {H}^l \in \mathbb {R}^{n\times d}$ denotes the sequence representation of the $l$-th encoder layer. Input to the first layer $\mathbf {H}^0$ is the element-wise addition of the source word embedding $\mathbf {X}$ and the corresponding positional encoding. $\textsc {Ffn}(\cdot )$ is a two-layer feed-forward network with a large intermediate representation and $\text{ReLU}$ activation function. Each encoder sublayer is wrapped with a residual connection (Eq. DISPLAY_FORM9), followed by layer normalization (Eq. ): $\textsc {Rc}(\mathbf {z}, \mathbf {z}^\prime ) = \mathbf {z} + \mathbf {z}^\prime , \qquad \textsc {Ln}(\mathbf {z}) = \mathbf {g} \odot \frac{\mathbf {z} - \mu }{\sigma } + \mathbf {b},$
where $\mathbf {z}$ and $\mathbf {z}^\prime $ are input vectors, and $\odot $ indicates element-wise multiplication. $\mu $ and $\sigma $ denote the mean and standard deviation statistics of vector $\mathbf {z}$. The normalized $\mathbf {z}$ is then re-scaled and re-centered by trainable parameters $\mathbf {g}$ and $\mathbf {b}$ individually.
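For concreteness, a small numpy sketch of the post-norm sublayer pattern $\textsc {Ln}(\mathbf {z} + f(\mathbf {z}))$ used above; this is a generic illustration with our own variable names, not the paper's code:

```python
import numpy as np

def layer_norm(z, g, b, eps=1e-6):
    """LN(z) = g * (z - mu) / sigma + b, applied to the last dimension."""
    mu = z.mean(-1, keepdims=True)
    sigma = z.std(-1, keepdims=True)
    return g * (z - mu) / (sigma + eps) + b

def sublayer(z, f, g, b):
    """Post-norm residual block: LN(RC(z, f(z))) = LN(z + f(z)).

    f is any sublayer function (self-attention, encoder-decoder attention
    or the position-wise feed-forward network)."""
    return layer_norm(z + f(z), g, b)

# toy usage
x = np.random.randn(5, 8)                  # (sequence length, model dim)
g, b = np.ones(8), np.zeros(8)
out = sublayer(x, lambda z: z @ np.random.randn(8, 8), g, b)
print(out.shape)
```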
The decoder also consists of $L$ identical layers, each of which extends the encoder sublayers with an encoder-decoder attention sublayer (Eq. ), $\textsc {Att}(\cdot , \mathbf {H}^L)$, which attends over the top encoder states $\mathbf {H}^L$ to capture translation alignment from target words to relevant source words.
$\mathbf {S}^l \in \mathbb {R}^{m\times d}$ is the sequence representation of the $l$-th decoder layer. Input $\mathbf {S}^0$ is defined similar to $\mathbf {H}^0$. To ensure auto-regressive decoding, the attention weights in Eq. DISPLAY_FORM10 are masked to prevent attention to future target tokens.
The Transformer's parameters are typically initialized by sampling from a uniform distribution: $\theta \sim \mathcal {U}\left(-\gamma , \gamma \right), \qquad \gamma = \sqrt{\frac{6}{d_i + d_o}},$
where $d_i$ and $d_o$ indicate the input and output dimension respectively. This initialization has the advantage of maintaining the variance of activations and of back-propagated gradients, and can help train deep neural networks BIBREF29.
Vanishing Gradient Analysis
One natural way to deepen Transformer is simply enlarging the layer number $L$. Unfortunately, Figure FIGREF2 shows that this would give rise to gradient vanishing on both the encoder and the decoder at the lower layers, and that the case on the decoder side is worse. We identified a structural problem in the Transformer architecture that gives rise to this issue, namely the interaction of RC and LN, which we will here discuss in more detail.
Given an input vector $\mathbf {z} \in \mathbb {R}^d$, let us consider the general structure of RC followed by LN: $\mathbf {r} = \mathbf {z} + f(\mathbf {z}), \qquad \mathbf {o} = \textsc {Ln}(\mathbf {r}),$
where $\mathbf {r}, \mathbf {o} \in \mathbb {R}^d$ are intermediate outputs. $f(\cdot )$ represents any neural network, such as a recurrent, convolutional or attention network. Suppose that during back-propagation the error signal at the output of LN is $\mathbf {\delta }_o$. The contributions of RC and LN to the error signal are as follows: $\mathbf {\delta }_r = \mathbf {\delta }_o \frac{\partial \mathbf {o}}{\partial \mathbf {r}} = \frac{1}{\sigma }\, \mathbf {\delta }_o\, \text{diag}(\mathbf {g}) \left(\mathbf {I} - \frac{1}{d}\left(\mathbf {1}\mathbf {1}^T + \bar{\mathbf {r}}\bar{\mathbf {r}}^T\right)\right), \qquad \mathbf {\delta }_z = \mathbf {\delta }_r \frac{\partial \mathbf {r}}{\partial \mathbf {z}} = \mathbf {\delta }_r \left(\mathbf {I} + \frac{\partial f(\mathbf {z})}{\partial \mathbf {z}}\right),$
where $\mathbf {\bar{r}}$ denotes the normalized input, $\mathbf {I}$ is the identity matrix and $\text{diag}(\cdot )$ establishes a diagonal matrix from its input. The resulting $\mathbf {\delta }_r$ and $\mathbf {\delta }_z$ are the error signals arriving at the outputs $\mathbf {r}$ and $\mathbf {z}$ respectively.
We define the change of the error signal as follows: $\beta = \frac{\Vert \mathbf {\delta }_z\Vert _2}{\Vert \mathbf {\delta }_o\Vert _2}, \qquad \beta _{\textsc {Ln}} = \frac{\Vert \mathbf {\delta }_r\Vert _2}{\Vert \mathbf {\delta }_o\Vert _2}, \qquad \beta _{\textsc {Rc}} = \frac{\Vert \mathbf {\delta }_z\Vert _2}{\Vert \mathbf {\delta }_r\Vert _2},$
where $\beta $ (or model ratio), $\beta _{\textsc {Ln}}$ (or LN ratio) and $\beta _{\textsc {Rc}}$ (or RC ratio) measure the gradient norm ratio of the whole residual block, the layer normalization and the residual connection respectively. Informally, a neural model should preserve the gradient norm between layers ($\beta \approx 1$) so as to allow training of very deep models BIBREF30.
We resort to empirical evidence to analyze these ratios. Results in Table TABREF16 show that LN weakens the error signal ($\beta _{\textsc {Ln}} < 1$) but RC strengthens it ($\beta _{\textsc {Rc}} > 1$). One explanation for LN's decay effect is the large output variance of RC ($\text{Var}(\mathbf {r}) > 1$), which negatively affects $\mathbf {\delta }_r$ as shown in Eq. DISPLAY_FORM13. By contrast, the short-cut in RC ensures that the error signal at a higher layer, $\mathbf {\delta }_r$, can always be safely carried on to the lower layer no matter how complex $\frac{\partial f}{\partial \mathbf {z}}$ is, as in Eq. , increasing the ratio.
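The ratios can be estimated numerically for a toy residual block. The sketch below is our own setup, not the paper's measurement protocol: it back-propagates a random error signal through LN and RC using finite-difference Jacobians.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64

def ln(r, g, b):
    mu, sigma = r.mean(), r.std()
    return g * (r - mu) / sigma + b

# toy sublayer f(z): a linear map followed by ReLU, wrapped by the residual connection
W = rng.normal(0.0, np.sqrt(2.0 / d), (d, d))
f = lambda z: np.maximum(W @ z, 0.0)

z = rng.normal(0.0, 1.0, d)
g, b = np.ones(d), np.zeros(d)
r = z + f(z)                       # output of the residual connection
delta_o = rng.normal(0.0, 1.0, d)  # error signal arriving at the LN output

eps = 1e-5
J_ln = np.zeros((d, d))            # Jacobian of LN at r (finite differences)
J_f = np.zeros((d, d))             # Jacobian of f at z
for j in range(d):
    e = np.zeros(d); e[j] = eps
    J_ln[:, j] = (ln(r + e, g, b) - ln(r - e, g, b)) / (2 * eps)
    J_f[:, j] = (f(z + e) - f(z - e)) / (2 * eps)

delta_r = J_ln.T @ delta_o               # error signal after LN
delta_z = delta_r + J_f.T @ delta_r      # error signal after RC (short-cut plus f)

beta_ln = np.linalg.norm(delta_r) / np.linalg.norm(delta_o)  # LN ratio
beta_rc = np.linalg.norm(delta_z) / np.linalg.norm(delta_r)  # RC ratio
print(beta_ln, beta_rc, beta_ln * beta_rc)                   # model ratio = product
```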
Depth-Scaled Initialization
Results on the model ratio show that self-attention sublayer has a (near) increasing effect ($\beta > 1$) that intensifies error signal, while feed-forward sublayer manifests a decreasing effect ($\beta < 1$). In particular, though the encoder-decoder attention sublayer and the self-attention sublayer share the same attention formulation, the model ratio of the former is smaller. As shown in Eq. and DISPLAY_FORM7, part of the reason is that encoder-decoder attention can only back-propagate gradients to lower layers through the query representation $\mathbf {Q}$, bypassing gradients at the key $\mathbf {K}$ and the value $\mathbf {V}$ to the encoder side. This negative effect explains why the decoder suffers from more severe gradient vanishing than the encoder in Figure FIGREF2.
The gradient norm is preserved better through the self-attention layer than the encoder-decoder attention, which offers insights on the successful training of the deep Transformer in BERT BIBREF2 and GPT BIBREF3, where encoder-decoder attention is not involved. However, results in Table TABREF16 also suggests that the self-attention sublayer in the encoder is not strong enough to counteract the gradient loss in the feed-forward sublayer. That is why BERT and GPT adopt a much smaller standard deviation (0.02) for initialization, in a similar spirit to our solution.
We attribute the gradient vanishing issue to the large output variance of RC (Eq. DISPLAY_FORM13). Considering that activation variance is positively correlated with parameter variance BIBREF29, we propose DS-Init and change the original initialization method in Eq. DISPLAY_FORM11 as follows: $\theta \sim \mathcal {U}\left(-\frac{\gamma \alpha }{\sqrt{l}},\; \frac{\gamma \alpha }{\sqrt{l}}\right),$
where $\alpha $ is a hyperparameter in the range of $[0, 1]$ and $l$ denotes layer depth. Hyperparameter $\alpha $ improves the flexibility of our method. Compared with existing approaches BIBREF19, BIBREF20, our solution does not require modifications in the model architecture and hence is easy to implement.
According to the property of uniform distribution, the variance of model parameters decreases from $\frac{\gamma ^2}{3}$ to $\frac{\gamma ^2\alpha ^2}{3l}$ after applying DS-Init. By doing so, a higher layer would have smaller output variance of RC so that more gradients can flow back. Results in Table TABREF16 suggest that DS-Init narrows both the variance and different ratios to be $\sim $1, ensuring the stability of gradient back-propagation. Evidence in Figure FIGREF2 also shows that DS-Init helps keep the gradient norm and slightly increases it on the encoder side. This is because DS-Init endows lower layers with parameters of larger variance and activations of larger norm. When error signals at different layers are of similar scale, the gradient norm at lower layers would be larger. Nevertheless, this increase does not hurt model training based on our empirical observation.
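A minimal sketch of DS-Init for a single weight matrix, assuming the fan-based bound $\gamma $ given above; parameter names are ours:

```python
import numpy as np

def ds_init(d_in, d_out, layer_depth, alpha=1.0, seed=0):
    """Depth-scaled initialization.

    Baseline:  theta ~ U(-gamma, gamma),  gamma = sqrt(6 / (d_in + d_out))
    DS-Init:   theta ~ U(-gamma * alpha / sqrt(l), gamma * alpha / sqrt(l))
    so Var(theta) drops from gamma^2 / 3 to gamma^2 * alpha^2 / (3 * l).
    """
    rng = np.random.default_rng(seed)
    gamma = np.sqrt(6.0 / (d_in + d_out))
    bound = gamma * alpha / np.sqrt(layer_depth)
    return rng.uniform(-bound, bound, size=(d_in, d_out))

W_layer1 = ds_init(512, 512, layer_depth=1)
W_layer12 = ds_init(512, 512, layer_depth=12)
print(W_layer1.var(), W_layer12.var())   # variance shrinks with layer depth
```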
DS-Init is partially inspired by the Fixup initialization BIBREF22. Both of them try to reduce the output variance of RC. The difference is that Fixup focuses on overcoming gradient explosion caused by consecutive RCs and seeks to enable training without LN, but at the cost of carefully handling the parameter initialization of each matrix transformation, including manipulating the initialization of different bias and scale terms. Instead, DS-Init aims at solving gradient vanishing in the deep Transformer caused by the structure of RC followed by LN. We still employ LN to standardize layer activations and improve model convergence. The inclusion of LN ensures the stability and simplicity of DS-Init.
Merged Attention Model
With large model depth, deep Transformers unavoidably introduce high computational overhead. This brings about significantly longer training and decoding times. To remedy this issue, we propose a merged attention model for the decoder that integrates a simplified average-based self-attention sublayer into the encoder-decoder attention sublayer. Figure FIGREF17 highlights the difference.
The AAN model (Figure FIGREF19), as an alternative to the self-attention model (Figure FIGREF18), accelerates Transformer decoding by allowing decoding in linear time, avoiding the $\mathcal {O}(n^2)$ complexity of the self-attention mechanism BIBREF11. Unfortunately, the gating sublayer and the feed-forward sublayer inside AAN reduce the empirical performance improvement. We propose a simplified AAN by removing all matrix computation except for two linear projections:
where $\mathbf {M}_a$ denotes the average mask matrix for parallel computation BIBREF11. This new model is then combined with the encoder-decoder attention as shown in Figure FIGREF20:
The mapping $\mathbf {W}_o$ is shared for $\textsc {SAan}$ and $\textsc {Att}$. After combination, MAtt allows for the parallelization of AAN and encoder-decoder attention. | No |
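Since the exact SAan and MAtt equations are not reproduced above, the sketch below is only a plausible reading of the description: a cumulative-average mask with a single projection for the simplified AAN, standard encoder-decoder attention, and a shared output mapping $W_o$ applied to their combination. Details may differ from the paper.

```python
import numpy as np

def softmax(z):
    z = z - z.max(-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(-1, keepdims=True)

def average_mask(m):
    """M_a[t, k] = 1/t for k <= t: cumulative average over target positions."""
    M = np.tril(np.ones((m, m)))
    return M / M.sum(-1, keepdims=True)

def merged_attention(S, H, Wq, Wk, Wv, Wa, Wo):
    """Hypothetical merged sublayer combining a simplified average-based
    self-attention with encoder-decoder attention.

    S: (m, d) decoder-side inputs, H: (n, d) encoder outputs; all W_* are (d, d).
    """
    d = S.shape[-1]
    # simplified AAN: parallel cumulative average with one projection Wa
    saan = average_mask(S.shape[0]) @ (S @ Wa)
    # standard encoder-decoder attention over the encoder states
    Q, K, V = S @ Wq, H @ Wk, H @ Wv
    att = softmax(Q @ K.T / np.sqrt(d)) @ V
    # shared output mapping Wo applied to the combined signal
    return (saan + att) @ Wo
```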
22b8836cb00472c9780226483b29771ae3ebdc87 | 22b8836cb00472c9780226483b29771ae3ebdc87_0 | Q: What is the new initialization method proposed in this paper?
Text: Introduction
Named Entity Disambiguation (NED) is the task of linking mentions of entities in text to a given knowledge base, such as Freebase or Wikipedia. NED is a key component in Entity Linking (EL) systems, focusing on the disambiguation task itself, independently from the tasks of Named Entity Recognition (detecting mention bounds) and Candidate Generation (retrieving the set of potential candidate entities). NED has been recognized as an important component in NLP tasks such as semantic parsing BIBREF0 .
Current research on NED is mostly driven by a number of standard datasets, such as CoNLL-YAGO BIBREF1 , TAC KBP BIBREF2 and ACE BIBREF3 . These datasets are based on news corpora and Wikipedia, which are naturally coherent, well-structured, and rich in context. Global disambiguation models BIBREF4 , BIBREF5 , BIBREF6 leverage this coherency by jointly disambiguating all the mentions in a single document. However, domains such as web-page fragments, social media, or search queries, are often short, noisy, and less coherent; such domains lack the necessary contextual information for global methods to pay off, and present a more challenging setting in general.
In this work, we investigate the task of NED in a setting where only local and noisy context is available. In particular, we create a dataset of 3.2M short text fragments extracted from web pages, each containing a mention of a named entity. Our dataset is far larger than previously collected datasets, and contains 18K unique mentions linking to over 100K unique entities. We have empirically found it to be noisier and more challenging than existing datasets. For example:
“I had no choice but to experiment with other indoor games. I was born in Atlantic City so the obvious next choice was Monopoly. I played until I became a successful Captain of Industry.”
This short fragment is considerably less structured and has a more personal tone than a typical news article. It references the entity Monopoly_(Game), however expressions such as “experiment” and “Industry” can distract a naive disambiguation model because they are also related to the much more common entity Monopoly (economics term). Some sense of local semantics must be considered in order to separate the useful signals (e.g. “indoor games”, “played”) from the noisy ones.
We therefore propose a new model that leverages local contextual information to disambiguate entities. Our neural approach (based on RNNs with attention) leverages the vast amount of training data in WikilinksNED to learn representations for entity and context, allowing it to extract signals from noisy and unexpected context patterns.
While convolutional neural networks BIBREF7 , BIBREF8 and probabilistic attention BIBREF9 have been applied to the task, this is the first model to use RNNs and a neural attention model for NED. RNNs account for the sequential nature of textual context while the attention model is applied to reduce the impact of noise in the text.
Our experiments show that our model significantly outperforms existing state-of-the-art NED algorithms on WikilinksNED, suggesting that RNNs with attention are able to model short and noisy context better than current approaches. In addition, we evaluate our algorithm on CoNLL-YAGO BIBREF1 , a dataset of annotated news articles. We use a simple domain adaptation technique since CoNLL-YAGO lacks a large enough training set for our model, and achieve comparable results to other state-of-the-art methods. These experiments highlight the difference between the two datasets, indicating that our NED benchmark is substantially more challenging.
Code and data used for our experiments can be found at https://github.com/yotam-happy/NEDforNoisyText
The WikilinksNED Dataset: Entity Mentions in the Web
We introduce WikilinksNED, a large-scale NED dataset based on text fragments from the web. Our dataset is derived from the Wikilinks corpus BIBREF14 , which was constructed by crawling the web and collecting hyperlinks (mentions) linking to Wikipedia concepts (entities) and their surrounding text (context). Wikilinks contains 40 million mentions covering 3 million entities, collected from over 10 million web pages.
Wikilinks can be seen as a large-scale, naturally-occurring, crowd-sourced dataset where thousands of human annotators provide ground truths for mentions of interest. This means that the dataset contains various kinds of noise, especially due to incoherent contexts. The contextual noise presents an interesting test-case that supplements existing datasets that are sourced from mostly coherent and well-formed text.
To get a sense of textual noise we have set up a small experiment where we measure the similarity between entities mentioned in WikilinksNED and their surrounding context, and compare the results to CoNLL-YAGO. We use state-of-the-art word and entity embeddings obtained from yamada2016joint and compute cosine similarity between embeddings of the correct entity assignment and the mean of context words. We compare results from all mentions in CoNLL-YAGO to a sample of 50000 web fragments taken from WikilinksNED, using a window of words of size 40 around entity mentions. We find that similarity between context and correct entity is indeed lower for web mentions ( $0.163$ ) than for CoNLL-YAGO mentions ( $0.188$ ), and find this result to be statistically significant with very high probability ( $p<10^{-5}$ ) . This result indicates that web fragments in WikilinksNED are indeed noisier compared to CoNLL-YAGO documents.
We prepare our dataset from the local-context version of Wikilinks, and resolve ground-truth links using a Wikipedia dump from April 2016. We use the page and redirect tables for resolution, and keep the database pageid column as a unique identifier for Wikipedia entities. We discard mentions where the ground-truth could not be resolved (only 3% of mentions).
We collect all pairs of mention $m$ and entity $e$ appearing in the dataset, and compute the number of times $m$ refers to $e$ ( $\#(m,e)$ ), as well as the conditional probability of $e$ given $m$ : $P(e|m)=\#(m,e)/\sum _{e^{\prime }}\#(m,e^{\prime })$ . Examining these distributions reveals that many mentions belong to one of two extremes – either they have very little ambiguity, or they appear in the dataset only a handful of times and refer to different entities only a couple of times each. We deem the former to be less interesting for the purpose of NED, and suspect the latter to be noise with high probability. To filter these cases, we keep only mentions for which at least two different entities have 10 mentions each ( $\#(m,e) \ge 10$ ) and consist of at least 10% of occurrences ( $P(e|m) \ge 0.1$ ). This procedure aggressively filters our dataset and we are left with the 3.2M mentions described above.
Finally, we randomly split the data into train (80%), validation (10%), and test (10%), according to website domains in order to minimize lexical memorization BIBREF18 .
Algorithm
Our DNN model is a discriminative model which takes a pair of local context and candidate entity, and outputs a probability-like score for the candidate entity being correct. Both words and entities are represented using embedding dictionaries and we interpret local context as a window-of-words to the left and right of a mention. The left and right contexts are fed into a duo of Attention-RNN (ARNN) components which process each side and produce a fixed-length vector representation. The resulting vectors are concatenated and, along with the entity embedding, are then fed into a classifier network with two output units that are trained to emit a probability-like score of the candidate being a correct or corrupt assignment.
Model Architecture
Figure 1 illustrates the main components of our architecture: an embedding layer, a duo of ARNNs, each processing one side of the context (left and right), and a classifier.
The embedding layer first embeds both the entity and the context words as vectors (300 dimensions each).
The ARNN unit is composed from an RNN and an attention mechanism. Equation 10 represents the general semantics of an RNN unit. An RNN reads a sequence of vectors $\lbrace v_t\rbrace $ and maintains a hidden state vector $\lbrace h_t\rbrace $ . At each step a new hidden state is computed based on the previous hidden state and the next input vector using some function $f$ , and an output is computed using $g$ . This allows the RNN to “remember” important signals while scanning the context and to recognize signals spanning multiple words.
$$\begin{aligned} & h_t=f_{\Theta _1}(h_{t-1}, v_t) \\ & o_t=g_{\Theta _2}(h_t) \end{aligned}$$ (Eq. 10)
Our implementation uses a standard GRU unit BIBREF19 as an RNN. We fit the RNN unit with an additional attention mechanism, commonly used with state-of-the-art encoder-decoder models BIBREF20 , BIBREF21 . Since our model lacks a decoder, we use the entity embedding as a control signal for the attention mechanism.
Equation 11 details the equations governing the attention model.
$$\begin{aligned} & a_t \in \mathbb {R}; a_t=r_{\Theta _3}(o_t, v_{candidate}) \\ & a^{\prime }_t = \frac{1}{\sum _{i=1}^{t} \exp \lbrace a_i\rbrace } \exp \lbrace a_t\rbrace \\ & o_{attn}=\sum _{t} a^{\prime }_t o_t \end{aligned}$$ (Eq. 11)
The function $r$ computes an attention value at each step, using the RNN output $o_t$ and the candidate entity $v_{candidate}$ . The final output vector $o_{attn}$ is a fixed-size vector, which is the sum of all the output vectors of the RNN weighted according to the attention values. This allows the attention mechanism to decide on the importance of different context parts when examining a specific candidate. We follow bahdanau2014neural and parametrize the attention function $r$ as a single layer NN as shown in equation 12 .
$$r_{\Theta _3}(o_t, v_{candidate}) = Ao_t + Bv_{candidate} + b \\$$ (Eq. 12)
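A small numpy sketch of Equations 11-12, treating the GRU outputs $o_t$ as given and parameterizing $r$ as a one-output linear layer; names and shapes are ours:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def arnn_attention(O, v_cand, A, B, b):
    """Attention over RNN outputs, controlled by the candidate entity.

    O: (T, h) RNN outputs o_t; v_cand: (e,) candidate entity embedding.
    a_t = r(o_t, v_cand) = A o_t + B v_cand + b   (one scalar per step)
    Returns o_attn = sum_t softmax(a)_t * o_t, a fixed-size summary.
    """
    a = O @ A + v_cand @ B + b   # (T,) unnormalized attention values
    w = softmax(a)               # (T,) normalized attention weights a'_t
    return w @ O                 # (h,) weighted sum of RNN outputs

# toy usage
T, h, e = 20, 64, 300
o_attn = arnn_attention(np.random.randn(T, h), np.random.randn(e),
                        A=np.random.randn(h), B=np.random.randn(e), b=0.0)
print(o_attn.shape)
```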
The classifier network consists of a hidden layer and an output layer with two output units in a softmax. The output units are trained by optimizing a cross-entropy loss function.
Training
We assume our model is only given training examples for correct entity assignments and therefore use corrupt-sampling, where we automatically generate examples of wrong assignments. For each context-entity pair $(c,e)$ , where $e$ is the correct assignment for $c$ , we produce $k$ corrupt examples with the same context $c$ but with a different, corrupt entity $e^{\prime }$ . We considered two alternatives for corrupt sampling and provide an empirical comparison of the two approaches (Section "Evaluation" ):
Near-Misses: Sampling out of the candidate set of each mention. We have found this to be more effective where the training data reliably reflects the test-set distribution.
All-Entity: Sampling from the entire dictionary of entities. Better suited to cases where the training data or candidate generation does not reflect the test-set well. Has an added benefit of allowing us to utilize unambiguous training examples where only a single candidate is found.
We sample corrupt examples uniformly in both alternatives since with uniform sampling the ratio between the number of positive and negative examples of an entity is higher for popular entities, thus biasing the network towards popular entities. In the All-Entity case, this ratio is approximately proportional to the prior probability of the entity.
We note that preliminary experiments revealed that corrupt-sampling according to the distribution of entities in the dataset (as is done by Mikolov et al. mikolov2013distributed), rather than uniform sampling, did not perform well in our settings due to the lack of biasing toward popular entities.
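A minimal sketch of the corrupt-sampling step under both schemes is given below; the data structures and function names are illustrative assumptions rather than the authors' code.

```python
import random

def corrupt_examples(context, gold_entity, k, candidate_set=None, all_entities=None):
    """Produce k corrupt (context, entity) pairs for one training example.
    Near-Misses: pass the mention's candidate_set; All-Entity: pass all_entities."""
    pool = [e for e in (candidate_set if candidate_set is not None else all_entities)
            if e != gold_entity]
    sampled = random.sample(pool, min(k, len(pool)))    # uniform, without replacement
    return [(context, e, 0) for e in sampled]           # label 0 = corrupt assignment

# The observed pair is kept as a positive example (label 1):
# batch = [(context, gold_entity, 1)] + corrupt_examples(context, gold_entity, k=5,
#                                                        candidate_set=candidates)
```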
Model optimization was carried out using standard backpropagation and an AdaGrad optimizer BIBREF22 . We allowed the error to propagate through all parts of the network and fine tune all trainable parameters, including the word and entity embeddings themselves. We found the performance of our model substantially improves for the first few epochs and then continues to slowly converge with marginal gains, and therefore trained all models for 8 epochs with $k=5$ for corrupt-sampling.
Embedding Initialization
Training our model implicitly embeds the vocabulary of words and collection of entities in a common space. However, we found that explicitly initializing these embeddings with vectors pre-trained over a large collection of unlabeled data significantly improved performance (see Section "Effects of initialized embeddings and corrupt-sampling schemes" ). To this end, we implemented an approach based on the Skip-Gram with Negative-Sampling (SGNS) algorithm by mikolov2013distributed that simultaneously trains both word and entity vectors.
We used word2vecf BIBREF23 , which allows one to train word and context embeddings using arbitrary definitions of "word" and "context" by providing a dataset of word-context pairs $(w,c)$ , rather than a textual corpus. In our usage, we define a context as an entity $e$ . To compile a dataset of $(w,e)$ pairs, we consider every word $w$ that appeared in the Wikipedia article describing entity $e$ . We limit our vocabularies to words that appeared at least 20 times in the corpus and entities that contain at least 20 words in their articles. We ran the process for 10 epochs and produced vectors of 300 dimensions; other hyperparameters were set to their defaults.
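As an illustration of how the $(w,e)$ training pairs could be compiled before running word2vecf, consider the sketch below; it assumes a pre-tokenized mapping from entity id to article text and is not the authors' preprocessing code.

```python
from collections import Counter

def word_entity_pairs(articles, min_word_count=20, min_article_words=20):
    """articles: dict mapping entity id -> list of tokens in its Wikipedia article.
    Yields (word, entity) pairs in which the entity plays the role of the 'context',
    applying the vocabulary thresholds described above."""
    word_counts = Counter(w for tokens in articles.values() for w in tokens)
    for entity, tokens in articles.items():
        if len(tokens) < min_article_words:
            continue                                   # drop entities with short articles
        for w in tokens:
            if word_counts[w] >= min_word_count:
                yield (w, entity)
```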
levy2014neural showed that SGNS implicitly factorizes the word-context PMI matrix. Our approach is doing the same for the word-entity PMI matrix, which is highly related to the word-entity TFIDF matrix used in Explicit Semantic Analysis BIBREF24 .
Evaluation
In this section, we describe our experimental setup and compare our model to the state of the art on two datasets: our new WikilinksNED dataset, as well as the commonly-used CoNLL-YAGO dataset BIBREF1 . We also examine the effect of different corrupt-sampling schemes, and of initializing our model with pre-trained word and entity embeddings.
In all experiments, our model was trained with fixed-size left and right contexts (20 words in each side). We used a special padding symbol when the actual context was shorter than the window. Further, we filtered stopwords using NLTK's stop-word list prior to selecting the window in order to focus on more informative words. Our model was implemented using the Keras BIBREF25 and Tensorflow BIBREF26 libraries.
WikilinksNED
For this dataset we use Near-Misses corrupt-sampling, which was found to perform well due to the large training set that represents the test set well.
To isolate the effect of candidate generation algorithms, we used the following simple method for all systems: given a mention $m$ , consider all candidate entities $e$ that appeared as the ground-truth entity for $m$ at least once in the training corpus. This simple method yields $97\%$ ground-truth recall on the test set.
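In code, this candidate-generation rule reduces to a lookup table built from the training corpus; a minimal sketch with illustrative names follows.

```python
from collections import defaultdict

def build_candidate_table(training_pairs):
    """training_pairs: iterable of (mention, gold_entity) tuples from the training set.
    A mention's candidate set is every entity it was linked to at least once."""
    table = defaultdict(set)
    for mention, entity in training_pairs:
        table[mention].add(entity)
    return table

# candidates = build_candidate_table(train_pairs)[mention]
```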
Since we are the first to evaluate NED algorithms on WikilinksNED, we ran a selection of existing local NED systems and compared their performance to our algorithm's.
Yamada et al. yamada2016joint created a state-of-the-art NED system that models entity-context similarity with word and entity embeddings trained using the skip-gram model. We obtained the original embeddings from the authors, and trained the statistical features and ranking model on the WikilinksNED training set. Our configuration of Yamada et al.'s model used only their local features.
Cheng et al. Cheng2013 have made their global NED system publicly available. This algorithm uses GLOW BIBREF10 for local disambiguation. We compare our results to the ranking step of the algorithm, without the global component. Due to the long running time of this system, we only evaluated their method on the smaller test set, which contains 10,000 randomly sampled instances from the full 320,000-example test set.
Finally, we include the Most Probable Sense (MPS) baseline, which selects the entity that was seen most with the given mention during training.
We used standard micro P@1 accuracy for evaluation. Experimental results comparing our model with the baselines are reported in Table 1. Our RNN model significantly outperforms Yamada et al. on this data by over 5 points, indicating that the more expressive RNNs are indeed beneficial for this task. We find that the attention mechanism further improves our results by a small, yet statistically significant, margin.
CoNLL-YAGO
CoNLL-YAGO has a training set with 18505 non-NIL mentions, which our experiments showed is not sufficient to train our model on. To fit our model to this dataset we first used a simple domain adaptation technique and then incorporated a number of basic statistical and string based features.
We used a simple domain adaptation technique where we first trained our model on an available large corpus of labeled data derived from Wikipedia, and then trained the resulting model on the smaller training set of CoNLL BIBREF27 . The Wikipedia corpus was built by extracting all cross-reference links along with their context, resulting in over 80 million training examples. We trained our model with All-Entity corrupt sampling for 1 epoch on this data. The resulting model was then adapted to CoNLL-YAGO by training 1 epoch on CoNLL-YAGO's training set, where corrupt examples were produced by considering all possible candidates for each mention as corrupt-samples (Near-Misses corrupt sampling).
We proceeded to use the model in a similar setting to yamada2016joint where a Gradient Boosting Regression Tree (GBRT) BIBREF28 model was trained with our model's prediction as a feature along with a number of statistical and string-based features defined by Yamada. The statistical features include entity prior probability, conditional probability, number of candidates for the given mention and maximum conditional probability of the entity in the document. The string-based features include edit distance between mention and entity title and two boolean features indicating whether the entity title starts or ends with the mention and vice versa. The GBRT model parameters were set to the values reported as optimal by Yamada.
For comparability with existing methods we used two publicly available candidate datasets: (1) PPRforNED - Pershina et al. pershina2015personalized; (2) YAGO - Hoffart et al. hoffart2011robust.
As a baseline we took the standard Most Probable Sense (MPS) prediction, which selects the entity that was seen most with the given mention during training. We also compare to the following papers - Francis-Landau et al. francis2016capturing, Yamada et al. yamada2016joint, and Chisholm et al. chisholm2015entity, as they are all strong local approaches and a good source for comparison.
Table 2 displays the micro and macro P@1 scores on CoNLL-YAGO test-b for the different training steps. We find that when using only the training set of CoNLL-YAGO our model is under-trained and that the domain adaptation significantly boosts performance. We also find that incorporating extra statistical and string features yields a small additional improvement in performance.
The final micro and macro P@1 scores on CoNLL-YAGO test-b are displayed in Table 3. On this dataset our model achieves comparable results; however, it does not outperform the state of the art, probably because of the relatively small training set and our reliance on domain adaptation.
Effects of initialized embeddings and corrupt-sampling schemes
We performed a study of the effects of using pre-initialized embeddings for our model, and of using either All-Entity or Near-Misses corrupt-sampling. The evaluation was done on a $10\%$ sample of the evaluation set of the WikilinksNED corpus and can be seen in Table 4 .
We have found that using pre-initialized embeddings results in significant performance gains, due to the better starting point. We have also found that using Near-Misses, our model achieves significantly improved performance. We attribute this difference to the more efficient nature of training with near misses. Both these results were found to be statistically significant.
Error Analysis
We randomly sampled and manually analyzed 200 cases of prediction errors made by our model. This set was obtained from WikilinksNED's validation set that was not used for training.
Working with crowd-sourced data, we expected some errors to result from noise in the ground truths themselves. Indeed, we found that $19.5$ % (39/200) of the errors were not genuine errors, out of which $5\%$ (2) were wrong labels, $33\%$ (13) were predictions with an equivalent meaning to the correct entity, and in $61.5\%$ (24) our model suggested a more convincing solution than the original author by using specific hints from the context. For example, the mention 'Supreme leader', which was contextually associated with the Iranian leader Ali Khamenei, was linked by our model to 'supreme leader of Iran' while the "correct" tag was the general 'supreme leader' entity.
In addition, $15.5\%$ (31/200) were cases where a Wikipedia disambiguation-page was either the correct or predicted entity ( $2.5\%$ and $14\%$ , respectively). We considered the rest of the 130 errors as true semantic errors, and analyzed them in-depth.
First, we noticed that in $31.5$ % of the true errors (41/130) our model selected an entity that can be understood as a specific ( $6.5$ %) or general (25%) realization of the correct solution. For example, instead of predicting 'Aroma of wine' for a text on the scent and flavor of Turkish wine, the model assigned the mention 'Aroma' to the general 'Odor' entity. We observed that in 26% (34/130) of the error cases, the predicted entity had a very strong semantic relationship to the correct entity. A closer look revealed two prominent types of 'almost correct' errors that occurred repeatedly in the data. The first was a film/book/theater type of error ( $8.4$ %), where the actual and the predicted entities were different renditions of the same narrative. Even though they have different jargon and producers, those fields share extremely similar content, which may explain why they tend to be frequently confused by the algorithm. A third (4/14) of those cases were tagged as truly ambiguous even for a human reader. The second prominent type of 'almost correct' error was differentiating between adjectives that are used to describe properties of a nation. In particular, mentions such as 'Germanic', 'Chinese' and 'Dutch' were falsely assigned to entities that describe language instead of people, and vice versa. We observed this type of mistake in $8.4$ % of the errors (11/130).
Another interesting type of error occurred in cases where the correct entity had insufficient training. We defined insufficient-training errors as errors where the correct entity appeared less than 10 times in the training data. We saw that the model followed the MPS in 75% of these cases, showing that our model tends to fall back on the baseline in such cases. Further, the amount of generalization error in insufficient-training conditions was also significant ( $35.7\%$ ), as our model tended to select more general entities.
Conclusions
Our results indicate that the expressiveness of attention-RNNs indeed allows us to extract useful features from noisy context when sufficient amounts of training examples are available. This allows our model to significantly outperform existing state-of-the-art models. We find that both the use of pre-initialized embedding vocabularies and the choice of corrupt-sampling method are very important for properly training our model.
However, the gap between results of all systems tested on both CoNLL-YAGO and WikilinksNED indicates that mentions with noisy context are indeed a challenging test. We believe this to be an important real-world scenario, that represents a distinct test-case that fills a gap between existing news-based datasets and the much noisier Twitter data BIBREF29 that has received increasing attention. We find recurrent neural models are a promising direction for this task.
Finally, our error analysis shows a number of possible improvements that should be addressed. Since we use the training set for candidate generation, nonsensical candidates (i.e. disambiguation pages) cause our model to err and should be removed from the candidate set. In addition, we observe that lack of sufficient training for long-tail entities is still a problem, even when a large training set is available. We believe this, and some subtle semantic cases (book/movie), can be at least partially addressed by considering semantic properties of entities, such as types and categories. We intend to address these issues in future work.	They initialize their word and entity embeddings with vectors pre-trained over a large corpus of unlabeled data.
540e9db5595009629b2af005e3c06610e1901b12 | 540e9db5595009629b2af005e3c06610e1901b12_0 | Q: How was a quality control performed so that the text is noisy but the annotations are accurate?
Text: Introduction
Named Entity Disambiguation (NED) is the task of linking mentions of entities in text to a given knowledge base, such as Freebase or Wikipedia. NED is a key component in Entity Linking (EL) systems, focusing on the disambiguation task itself, independently from the tasks of Named Entity Recognition (detecting mention bounds) and Candidate Generation (retrieving the set of potential candidate entities). NED has been recognized as an important component in NLP tasks such as semantic parsing BIBREF0 .
Current research on NED is mostly driven by a number of standard datasets, such as CoNLL-YAGO BIBREF1 , TAC KBP BIBREF2 and ACE BIBREF3 . These datasets are based on news corpora and Wikipedia, which are naturally coherent, well-structured, and rich in context. Global disambiguation models BIBREF4 , BIBREF5 , BIBREF6 leverage this coherency by jointly disambiguating all the mentions in a single document. However, domains such as web-page fragments, social media, or search queries, are often short, noisy, and less coherent; such domains lack the necessary contextual information for global methods to pay off, and present a more challenging setting in general.
In this work, we investigate the task of NED in a setting where only local and noisy context is available. In particular, we create a dataset of 3.2M short text fragments extracted from web pages, each containing a mention of a named entity. Our dataset is far larger than previously collected datasets, and contains 18K unique mentions linking to over 100K unique entities. We have empirically found it to be noisier and more challenging than existing datasets. For example:
“I had no choice but to experiment with other indoor games. I was born in Atlantic City so the obvious next choice was Monopoly. I played until I became a successful Captain of Industry.”
This short fragment is considerably less structured and has a more personal tone than a typical news article. It references the entity Monopoly_(Game); however, expressions such as “experiment” and “Industry” can distract a naive disambiguation model because they are also related to the much more common entity Monopoly (economics term). Some sense of local semantics must be considered in order to separate the useful signals (e.g. “indoor games”, “played”) from the noisy ones.
We therefore propose a new model that leverages local contextual information to disambiguate entities. Our neural approach (based on RNNs with attention) leverages the vast amount of training data in WikilinksNED to learn representations for entity and context, allowing it to extract signals from noisy and unexpected context patterns.
While convolutional neural networks BIBREF7 , BIBREF8 and probabilistic attention BIBREF9 have been applied to the task, this is the first model to use RNNs and a neural attention model for NED. RNNs account for the sequential nature of textual context while the attention model is applied to reduce the impact of noise in the text.
Our experiments show that our model significantly outperforms existing state-of-the-art NED algorithms on WikilinksNED, suggesting that RNNs with attention are able to model short and noisy context better than current approaches. In addition, we evaluate our algorithm on CoNLL-YAGO BIBREF1 , a dataset of annotated news articles. We use a simple domain adaptation technique since CoNLL-YAGO lacks a large enough training set for our model, and achieve comparable results to other state-of-the-art methods. These experiments highlight the difference between the two datasets, indicating that our NED benchmark is substantially more challenging.
Code and data used for our experiments can be found at https://github.com/yotam-happy/NEDforNoisyText
The WikilinksNED Dataset: Entity Mentions in the Web
We introduce WikilinksNED, a large-scale NED dataset based on text fragments from the web. Our dataset is derived from the Wikilinks corpus BIBREF14 , which was constructed by crawling the web and collecting hyperlinks (mentions) linking to Wikipedia concepts (entities) and their surrounding text (context). Wikilinks contains 40 million mentions covering 3 million entities, collected from over 10 million web pages.
Wikilinks can be seen as a large-scale, naturally-occurring, crowd-sourced dataset where thousands of human annotators provide ground truths for mentions of interest. This means that the dataset contains various kinds of noise, especially due to incoherent contexts. The contextual noise presents an interesting test-case that supplements existing datasets that are sourced from mostly coherent and well-formed text.
To get a sense of textual noise we have set up a small experiment where we measure the similarity between entities mentioned in WikilinksNED and their surrounding context, and compare the results to CoNLL-YAGO. We use state-of-the-art word and entity embeddings obtained from yamada2016joint and compute cosine similarity between embeddings of the correct entity assignment and the mean of context words. We compare results from all mentions in CoNLL-YAGO to a sample of 50000 web fragments taken from WikilinksNED, using a window of words of size 40 around entity mentions. We find that similarity between context and correct entity is indeed lower for web mentions ( $0.163$ ) than for CoNLL-YAGO mentions ( $0.188$ ), and find this result to be statistically significant with very high probability ( $p<10^{-5}$ ) . This result indicates that web fragments in WikilinksNED are indeed noisier compared to CoNLL-YAGO documents.
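The similarity measure used in this experiment can be sketched as follows, assuming the pre-trained entity and word vectors are available as NumPy arrays (illustrative code, not the original analysis script).

```python
import numpy as np

def context_entity_similarity(entity_vec, context_word_vecs):
    """Cosine similarity between an entity embedding and the mean embedding of the
    words in a window around the mention (here, the 40-word window described above)."""
    ctx = np.mean(context_word_vecs, axis=0)
    return float(entity_vec @ ctx /
                 (np.linalg.norm(entity_vec) * np.linalg.norm(ctx)))
```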
We prepare our dataset from the local-context version of Wikilinks, and resolve ground-truth links using a Wikipedia dump from April 2016. We use the page and redirect tables for resolution, and keep the database pageid column as a unique identifier for Wikipedia entities. We discard mentions where the ground-truth could not be resolved (only 3% of mentions).
We collect all pairs of mention $m$ and entity $e$ appearing in the dataset, and compute the number of times $m$ refers to $e$ ( $\#(m,e)$ ), as well as the conditional probability of $e$ given $m$ : $P(e|m)=\#(m,e)/\sum _{e^{\prime }}\#(m,e^{\prime })$ . Examining these distributions reveals that many mentions belong to two extremes – either they have very little ambiguity, or they appear in the dataset only a handful of times and refer to different entities only a couple of times each. We deem the former to be less interesting for the purpose of NED, and suspect the latter to be noise with high probability. To filter these cases, we keep only mentions for which at least two different entities have 10 mentions each ( $\#(m,e) \ge 10$ ) and account for at least 10% of occurrences ( $P(e|m) \ge 0.1$ ). This procedure aggressively filters our dataset and we are left with 3.2M mentions.
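The filtering procedure can be summarized by the following sketch (function and variable names are illustrative):

```python
from collections import defaultdict, Counter

def filter_mentions(pairs, min_count=10, min_prob=0.1, min_senses=2):
    """pairs: iterable of (mention, entity) occurrences resolved against Wikipedia.
    Keep a mention only if at least `min_senses` distinct entities each have
    #(m,e) >= min_count and P(e|m) >= min_prob."""
    per_mention = defaultdict(Counter)
    for m, e in pairs:
        per_mention[m][e] += 1                         # #(m, e)
    kept = set()
    for m, entity_counts in per_mention.items():
        total = sum(entity_counts.values())            # sum over e' of #(m, e')
        frequent = [e for e, c in entity_counts.items()
                    if c >= min_count and c / total >= min_prob]
        if len(frequent) >= min_senses:
            kept.add(m)
    return kept
```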
Finally, we randomly split the data into train (80%), validation (10%), and test (10%), according to website domains in order to minimize lexical memorization BIBREF18 .
Algorithm
Our DNN model is a discriminative model which takes a pair of local context and candidate entity, and outputs a probability-like score for the candidate entity being correct. Both words and entities are represented using embedding dictionaries, and we interpret the local context as a window-of-words to the left and right of a mention. The left and right contexts are fed into a duo of Attention-RNN (ARNN) components which process each side and produce a fixed-length vector representation. The resulting vectors are concatenated and, along with the entity embedding, fed into a classifier network with two output units that are trained to emit a probability-like score of the candidate being a correct or a corrupt assignment.
Model Architecture
Figure 1 illustrates the main components of our architecture: an embedding layer, a duo of ARNNs, each processing one side of the context (left and right), and a classifier.
The embedding layer first embeds both the entity and the context words as vectors (300 dimensions each).
The ARNN unit is composed from an RNN and an attention mechanism. Equation 10 represents the general semantics of an RNN unit. An RNN reads a sequence of vectors $\lbrace v_t\rbrace $ and maintains a hidden state vector $\lbrace h_t\rbrace $ . At each step a new hidden state is computed based on the previous hidden state and the next input vector using some function $f$ , and an output is computed using $g$ . This allows the RNN to “remember” important signals while scanning the context and to recognize signals spanning multiple words.
$$\begin{aligned} & h_t=f_{\Theta _1}(h_{t-1}, v_t) \\ & o_t=g_{\Theta _2}(h_t) \end{aligned}$$ (Eq. 10)
Our implementation uses a standard GRU unit BIBREF19 as an RNN. We fit the RNN unit with an additional attention mechanism, commonly used with state-of-the-art encoder-decoder models BIBREF20 , BIBREF21 . Since our model lacks a decoder, we use the entity embedding as a control signal for the attention mechanism.
Equation 11 details the equations governing the attention model.
$$\begin{aligned} & a_t \in \mathbb {R}; a_t=r_{\Theta _3}(o_t, v_{candidate}) \\ & a^{\prime }_t = \frac{1}{\sum _{i=1}^{t} \exp \lbrace a_i\rbrace } \exp \lbrace a_t\rbrace \\ & o_{attn}=\sum _{t} a^{\prime }_t o_t \end{aligned}$$ (Eq. 11)
The function $r$ computes an attention value at each step, using the RNN output $o_t$ and the candidate entity $v_{candidate}$ . The final output vector $o_{attn}$ is a fixed-size vector, which is the sum of all the output vectors of the RNN weighted according to the attention values. This allows the attention mechanism to decide on the importance of different context parts when examining a specific candidate. We follow bahdanau2014neural and parametrize the attention function $r$ as a single layer NN as shown in equation 12 .
$$r_{\Theta _3}(o_t, v_{candidate}) = Ao_t + Bv_{candidate} + b \\$$ (Eq. 12)
The classifier network consists of a hidden layer and an output layer with two output units in a softmax. The output units are trained by optimizing a cross-entropy loss function.
Training
We assume our model is only given training examples for correct entity assignments and therefore use corrupt-sampling, where we automatically generate examples of wrong assignments. For each context-entity pair $(c,e)$ , where $e$ is the correct assignment for $c$ , we produce $k$ corrupt examples with the same context $c$ but with a different, corrupt entity $e^{\prime }$ . We considered two alternatives for corrupt sampling and provide an empirical comparison of the two approaches (Section "Evaluation" ):
Near-Misses: Sampling out of the candidate set of each mention. We have found this to be more effective where the training data reliably reflects the test-set distribution.
All-Entity: Sampling from the entire dictionary of entities. Better suited to cases where the training data or candidate generation does not reflect the test-set well. Has an added benefit of allowing us to utilize unambiguous training examples where only a single candidate is found.
We sample corrupt examples uniformly in both alternatives since with uniform sampling the ratio between the number of positive and negative examples of an entity is higher for popular entities, thus biasing the network towards popular entities. In the All-Entity case, this ratio is approximately proportional to the prior probability of the entity.
We note that preliminary experiments revealed that corrupt-sampling according to the distribution of entities in the dataset (as is done by Mikolov et al. mikolov2013distributed), rather than uniform sampling, did not perform well in our settings due to the lack of biasing toward popular entities.
Model optimization was carried out using standard backpropagation and an AdaGrad optimizer BIBREF22 . We allowed the error to propagate through all parts of the network and fine tune all trainable parameters, including the word and entity embeddings themselves. We found the performance of our model substantially improves for the first few epochs and then continues to slowly converge with marginal gains, and therefore trained all models for 8 epochs with $k=5$ for corrupt-sampling.
Embedding Initialization
Training our model implicitly embeds the vocabulary of words and collection of entities in a common space. However, we found that explicitly initializing these embeddings with vectors pre-trained over a large collection of unlabeled data significantly improved performance (see Section "Effects of initialized embeddings and corrupt-sampling schemes" ). To this end, we implemented an approach based on the Skip-Gram with Negative-Sampling (SGNS) algorithm by mikolov2013distributed that simultaneously trains both word and entity vectors.
We used word2vecf BIBREF23 , which allows one to train word and context embeddings using arbitrary definitions of "word" and "context" by providing a dataset of word-context pairs $(w,c)$ , rather than a textual corpus. In our usage, we define a context as an entity $e$ . To compile a dataset of $(w,e)$ pairs, we consider every word $w$ that appeared in the Wikipedia article describing entity $e$ . We limit our vocabularies to words that appeared at least 20 times in the corpus and entities that contain at least 20 words in their articles. We ran the process for 10 epochs and produced vectors of 300 dimensions; other hyperparameters were set to their defaults.
levy2014neural showed that SGNS implicitly factorizes the word-context PMI matrix. Our approach is doing the same for the word-entity PMI matrix, which is highly related to the word-entity TFIDF matrix used in Explicit Semantic Analysis BIBREF24 .
Evaluation
In this section, we describe our experimental setup and compare our model to the state of the art on two datasets: our new WikilinksNED dataset, as well as the commonly-used CoNLL-YAGO dataset BIBREF1 . We also examine the effect of different corrupt-sampling schemes, and of initializing our model with pre-trained word and entity embeddings.
In all experiments, our model was trained with fixed-size left and right contexts (20 words in each side). We used a special padding symbol when the actual context was shorter than the window. Further, we filtered stopwords using NLTK's stop-word list prior to selecting the window in order to focus on more informative words. Our model was implemented using the Keras BIBREF25 and Tensorflow BIBREF26 libraries.
WikilinksNED
For this dataset we use Near-Misses corrupt-sampling, which was found to perform well due to the large training set that represents the test set well.
To isolate the effect of candidate generation algorithms, we used the following simple method for all systems: given a mention $m$ , consider all candidate entities $e$ that appeared as the ground-truth entity for $m$ at least once in the training corpus. This simple method yields $97\%$ ground-truth recall on the test set.
Since we are the first to evaluate NED algorithms on WikilinksNED, we ran a selection of existing local NED systems and compared their performance to our algorithm's.
Yamada et al. yamada2016joint created a state-of-the-art NED system that models entity-context similarity with word and entity embeddings trained using the skip-gram model. We obtained the original embeddings from the authors, and trained the statistical features and ranking model on the WikilinksNED training set. Our configuration of Yamada et al.'s model used only their local features.
Cheng et al. Cheng2013 have made their global NED system publicly available. This algorithm uses GLOW BIBREF10 for local disambiguation. We compare our results to the ranking step of the algorithm, without the global component. Due to the long running time of this system, we only evaluated their method on the smaller test set, which contains 10,000 randomly sampled instances from the full 320,000-example test set.
Finally, we include the Most Probable Sense (MPS) baseline, which selects the entity that was seen most with the given mention during training.
We used standard micro P@1 accuracy for evaluation. Experimental results comparing our model with the baselines are reported in Table 1. Our RNN model significantly outperforms Yamada et al. on this data by over 5 points, indicating that the more expressive RNNs are indeed beneficial for this task. We find that the attention mechanism further improves our results by a small, yet statistically significant, margin.
CoNLL-YAGO
CoNLL-YAGO has a training set with 18505 non-NIL mentions, which our experiments showed is not sufficient to train our model on. To fit our model to this dataset we first used a simple domain adaptation technique and then incorporated a number of basic statistical and string based features.
We used a simple domain adaptation technique where we first trained our model on an available large corpus of labeled data derived from Wikipedia, and then trained the resulting model on the smaller training set of CoNLL BIBREF27 . The Wikipedia corpus was built by extracting all cross-reference links along with their context, resulting in over 80 million training examples. We trained our model with All-Entity corrupt sampling for 1 epoch on this data. The resulting model was then adapted to CoNLL-YAGO by training 1 epoch on CoNLL-YAGO's training set, where corrupt examples were produced by considering all possible candidates for each mention as corrupt-samples (Near-Misses corrupt sampling).
We proceeded to use the model in a similar setting to yamada2016joint where a Gradient Boosting Regression Tree (GBRT) BIBREF28 model was trained with our model's prediction as a feature along with a number of statistical and string-based features defined by Yamada. The statistical features include entity prior probability, conditional probability, number of candidates for the given mention and maximum conditional probability of the entity in the document. The string-based features include edit distance between mention and entity title and two boolean features indicating whether the entity title starts or ends with the mention and vice versa. The GBRT model parameters were set to the values reported as optimal by Yamada.
For comparability with existing methods we used two publicly available candidate datasets: (1) PPRforNED - Pershina et al. pershina2015personalized; (2) YAGO - Hoffart et al. hoffart2011robust.
As a baseline we took the standard Most Probable Sense (MPS) prediction, which selects the entity that was seen most with the given mention during training. We also compare to the following papers - Francis-Landau et al. francis2016capturing, Yamada et al. yamada2016joint, and Chisholm et al. chisholm2015entity, as they are all strong local approaches and a good source for comparison.
Table 2 displays the micro and macro P@1 scores on CoNLL-YAGO test-b for the different training steps. We find that when using only the training set of CoNLL-YAGO our model is under-trained and that the domain adaptation significantly boosts performance. We also find that incorporating extra statistical and string features yields a small additional improvement in performance.
The final micro and macro P@1 scores on CoNLL-YAGO test-b are displayed in Table 3. On this dataset our model achieves comparable results; however, it does not outperform the state of the art, probably because of the relatively small training set and our reliance on domain adaptation.
Effects of initialized embeddings and corrupt-sampling schemes
We performed a study of the effects of using pre-initialized embeddings for our model, and of using either All-Entity or Near-Misses corrupt-sampling. The evaluation was done on a $10\%$ sample of the evaluation set of the WikilinksNED corpus and can be seen in Table 4 .
We have found that using pre-initialized embeddings results in significant performance gains, due to the better starting point. We have also found that using Near-Misses, our model achieves significantly improved performance. We attribute this difference to the more efficient nature of training with near misses. Both these results were found to be statistically significant.
Error Analysis
We randomly sampled and manually analyzed 200 cases of prediction errors made by our model. This set was obtained from WikilinksNED's validation set that was not used for training.
Working with crowd-sourced data, we expected some errors to result from noise in the ground truths themselves. Indeed, we found that $19.5$ % (39/200) of the errors were not genuine errors, out of which $5\%$ (2) were wrong labels, $33\%$ (13) were predictions with an equivalent meaning to the correct entity, and in $61.5\%$ (24) our model suggested a more convincing solution than the original author by using specific hints from the context. For example, the mention 'Supreme leader', which was contextually associated with the Iranian leader Ali Khamenei, was linked by our model to 'supreme leader of Iran' while the "correct" tag was the general 'supreme leader' entity.
In addition, $15.5\%$ (31/200) were cases where a Wikipedia disambiguation-page was either the correct or predicted entity ( $2.5\%$ and $14\%$ , respectively). We considered the rest of the 130 errors as true semantic errors, and analyzed them in-depth.
First, we noticed that in $31.5$ % of the true errors (41/130) our model selected an entity that can be understood as a specific ( $6.5$ %) or general (25%) realization of the correct solution. For example, instead of predicting 'Aroma of wine' for a text on the scent and flavor of Turkish wine, the model assigned the mention 'Aroma' to the general 'Odor' entity. We observed that in 26% (34/130) of the error cases, the predicted entity had a very strong semantic relationship to the correct entity. A closer look revealed two prominent types of 'almost correct' errors that occurred repeatedly in the data. The first was a film/book/theater type of error ( $8.4$ %), where the actual and the predicted entities were different renditions of the same narrative. Even though they have different jargon and producers, those fields share extremely similar content, which may explain why they tend to be frequently confused by the algorithm. A third (4/14) of those cases were tagged as truly ambiguous even for a human reader. The second prominent type of 'almost correct' error was differentiating between adjectives that are used to describe properties of a nation. In particular, mentions such as 'Germanic', 'Chinese' and 'Dutch' were falsely assigned to entities that describe language instead of people, and vice versa. We observed this type of mistake in $8.4$ % of the errors (11/130).
Another interesting type of error occurred in cases where the correct entity had insufficient training. We defined insufficient-training errors as errors where the correct entity appeared less than 10 times in the training data. We saw that the model followed the MPS in 75% of these cases, showing that our model tends to fall back on the baseline in such cases. Further, the amount of generalization error in insufficient-training conditions was also significant ( $35.7\%$ ), as our model tended to select more general entities.
Conclusions
Our results indicate that the expressiveness of attention-RNNs indeed allows us to extract useful features from noisy context when sufficient amounts of training examples are available. This allows our model to significantly outperform existing state-of-the-art models. We find that both the use of pre-initialized embedding vocabularies and the choice of corrupt-sampling method are very important for properly training our model.
However, the gap between results of all systems tested on both CoNLL-YAGO and WikilinksNED indicates that mentions with noisy context are indeed a challenging test. We believe this to be an important real-world scenario, that represents a distinct test-case that fills a gap between existing news-based datasets and the much noisier Twitter data BIBREF29 that has received increasing attention. We find recurrent neural models are a promising direction for this task.
Finally, our error analysis shows a number of possible improvements that should be addressed. Since we use the training set for candidate generation, nonsensical candidates (i.e. disambiguation pages) cause our model to err and should be removed from the candidate set. In addition, we observe that lack of sufficient training for long-tail entities is still a problem, even when a large training set is available. We believe this, and some subtle semantic cases (book/movie), can be at least partially addressed by considering semantic properties of entities, such as types and categories. We intend to address these issues in future work.	The authors believe that the Wikilinks corpus contains ground truth annotations while being noisy. They discard mentions that cannot have ground-truth verified by comparison with Wikipedia.
bd1a3c651ca2b27f283d3f36df507ed4eb24c2b0 | bd1a3c651ca2b27f283d3f36df507ed4eb24c2b0_0 | Q: Is it a neural model? How is it trained?
Text: Introduction
In active machine learning, a learner is able to query an oracle in order to obtain information that is expected to improve performance. Theoretical and empirical results show that active learning can speed acquisition for a variety of learning tasks BIBREF0 . Although impressive, most work on active machine learning has focused on relatively simple types of information requests (most often a request for a supervised label). In contrast, humans often learn by asking far richer questions which more directly target the critical parameters in a learning task. A human child might ask “Do all dogs have long tails?" or “What is the difference between cats and dogs?" BIBREF1 . A long term goal of artificial intelligence (AI) is to develop algorithms with a similar capacity to learn by asking rich questions. Our premise is that we can make progress toward this goal by better understanding human question asking abilities in computational terms BIBREF2 .
To that end, in this paper, we propose a new computational framework that explains how people construct rich and interesting queries within a particular domain. A key insight is to model questions as programs that, when executed on the state of a possible world, output an answer. For example, a program corresponding to “Does John prefer coffee to tea?” would return True for all possible world states where this is the correct answer and False for all others. Other questions may return different types of answers. For example “How many sugars does John take in his coffee?” would return a number 0, 1, 2, etc. depending on the world state. Thinking of questions as syntactically well-formed programs recasts the problem of question asking as one of program synthesis. We show that this powerful formalism offers a new approach to modeling question asking in humans and may eventually enable more human-like question asking in machines.
We evaluate our model using a data set containing natural language questions asked by human participants in an information-search game BIBREF3 . Given an ambiguous situation or context, our model can predict what questions human learners will ask by capturing constraints in how humans construct semantically meaningful questions. The method successfully predicts the frequencies of human questions given a game context, and can also synthesize novel human-like questions that were not present in the training set.
Related work
Contemporary active learning algorithms can query for labels or causal interventions BIBREF0 , but they lack the representational capacity to consider a richer range of queries, including those expressed in natural language. AI dialog systems are designed to ask questions, yet these systems are still far from achieving human-like question asking. Goal-directed dialog systems BIBREF4 , BIBREF5 , applied to tasks such as booking a table at a restaurant, typically choose between a relatively small set of canned questions (e.g., “How can I help you?”, “What type of food are you looking for?”), with little genuine flexibility or creativity. Deep learning systems have also been developed for visual “20 questions” style tasks BIBREF6 ; although these models can produce new questions, the questions typically take a stereotyped form (“Is it a person?”, “Is it a glove?” etc.). More open-ended question asking can be achieved by non-goal-driven systems trained on large amounts of natural language dialog, such as the recent progress demonstrated in BIBREF7 . However, these approaches cannot capture intentional, goal-directed forms of human question asking.
Recent work has probed other aspects of question asking. The Visual Question Generation (VQG) data set BIBREF8 contains images paired with interesting, human-generated questions. For instance, an image of a car wreck might be paired with the question, “What caused the accident?” Deep neural networks, similar to those used for image captioning, are capable of producing these types of questions after extensive training BIBREF8 , BIBREF9 , BIBREF10 . However, they require large datasets of images paired with questions, whereas people can ask intelligent questions in a novel scenario with no (or very limited) practice, as shown in our task below. Moreover, human question asking is robust to changes in task and goals, while state-of-the-art neural networks do not generalize flexibly in these ways.
The question data set
Our goal was to develop a model of context-sensitive, goal-directed question asking in humans, which falls outside the capabilities of the systems described above. We focused our analysis on a data set we collected in BIBREF3 , which consists of 605 natural language questions asked by 40 human players to resolve an ambiguous game situation (similar to “Battleship”). Players were individually presented with a game board consisting of a 6 $\times $ 6 grid of tiles. The tiles were initially turned over but each could be flipped to reveal an underlying color. The player's goal was to identify as quickly as possible the size, orientation, and position of “ships" (i.e., objects composed of multiple adjacent tiles of the same color) BIBREF11 . Every board had exactly three ships which were placed in non-overlapping but otherwise random locations. The ships were identified by their color S = {Blue, Red, Purple}. All ships had a width of 1, a length of N = {2, 3, 4} and orientation O = {Horizontal, Vertical}. Any tile that did not overlap with a ship displayed a null “water” color (light gray) when flipped.
After extensive instructions about the rules and purpose of the game and a number of practice rounds BIBREF3 , on each of 18 target contexts players were presented with a partly revealed game board (similar to Figure 1 B and 1 C) that provided ambiguous information about the actual shape and location of the ships. They were then given the chance to ask a natural-language question about the configuration. The player's goal was to use this question asking opportunity to gain as much information as possible about the hidden game board configuration. The only rules given to players about questions was that they must be answerable using one word (e.g., true/false, a number, a color, a coordinate like A1 or a row or column number) and no combination of questions was allowed. The questions were recorded via an HTML text box in which people typed what they wanted to ask. A good question for the context in Figure 1 B is “Do the purple and the red ship touch?”, while “What is the color of tile A1?” is not helpful because it can be inferred from the revealed game board and the rules of the game (ship sizes, etc.) that the answer is “Water” (see Figure 3 for additional example questions).
Each player completed 18 contexts where each presented a different underlying game board and partially revealed pattern. Since the usefulness of asking a question depends on the context, the data set consists of 605 question-context pairs $\langle q, c \rangle $ , with 26 to 39 questions per context. The basic challenge for our active learning method is to predict which question $q$ a human will ask from the given context $c$ and the overall rules of the game. This is a particularly challenging data set to model because of the subtle differences between contexts that determine whether a question is potentially useful, along with the open-ended nature of human question asking.
A probabilistic model of question generation
Here we describe the components of our probabilistic model of question generation. Section "Compositionality and computability" describes two key elements of our approach, compositionality and computability, as reflected in the choice to model questions as programs. Section "A grammar for producing questions" describes a grammar that defines the space of allowable questions/programs. Section "Probabilistic generative model" specifies a probabilistic generative model for sampling context-sensitive, relevant programs from this space. The remaining sections cover optimization, the program features, and alternative models (Sections "Optimization" - "Alternative models" ).
Compositionality and computability
The analysis of the data set BIBREF3 revealed that many of the questions in the data set share similar concepts organized in different ways. For example, the concept of ship size appeared in various ways across questions:
“How long is the blue ship?”
“Does the blue ship have 3 tiles?”
“Are there any ships with 4 tiles?”
“Is the blue ship less then 4 blocks?”
“Are all 3 ships the same size?”
“Does the red ship have more blocks than the blue ship?”
As a result, the first key element of modeling question generation was to recognize the compositionality of these questions. In other words, there are conceptual building blocks (predicates like size(x) and plus(x,y)) that can be put together to create the meaning of other questions (plus(size(Red), size(Purple))). Combining meaningful parts to give meaning to larger expressions is a prominent approach in linguistics BIBREF12 , and compositionality more generally has been an influential idea in cognitive science BIBREF13 , BIBREF14 , BIBREF15 .
The second key element is the computability of questions. We propose that human questions are like programs that when executed on the state of a world output an answer. For example, a program that when executed looks up the number of blue tiles on a hypothesized or imagined Battleship game board and returns said number corresponds to the question “How long is the blue ship?”. In this way, programs can be used to evaluate the potential for useful information from a question by executing the program over a set of possible or likely worlds and preferring questions that are informative for identifying the true world state. This approach to modeling questions is closely related to formalizing question meaning as a partition over possible worlds BIBREF16 , a notion used in previous studies in linguistics BIBREF17 and psychology BIBREF18 . Machine systems for question answering have also fruitfully modeled questions as programs BIBREF19 , BIBREF20 , and computational work in cognitive science has modeled various kinds of concepts as programs BIBREF21 , BIBREF22 , BIBREF23 . An important contribution of our work here is that it tackles question asking and provides a method for generating meaningful questions/programs from scratch.
A grammar for producing questions
To capture both compositionality and computability, we represent questions in a simple programming language, based on lambda calculus and LISP. Every unit of computation in that language is surrounded by parentheses, with the first element being a function and all following elements being arguments to that function (i.e., using prefix notation). For instance, the question “How long is the blue ship?” would be represented by the small program (size Blue). More examples will be discussed below. With this step we abstracted the question representation from the exact choice of words while maintaining its meaning. As such the questions can be thought of as being represented in a “language of thought” BIBREF24 .
Programs in this language can be combined as in the example (> (size Red) (size Blue)), asking whether the red ship is larger than the blue ship. To compute an answer, first the inner parentheses are evaluated, each returning a number corresponding to the number of red or blue tiles on the game board, respectively. Then these numbers are used as arguments to the > function, which returns either True or False.
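To illustrate the computability of questions, a toy evaluator for a few primitives is sketched below; it uses Python tuples in place of the LISP-style syntax and a simplified board representation (a dict from tile to color), and covers only a small subset of the language.

```python
def evaluate(expr, board):
    """Execute a question-program such as ('>', ('size', 'Red'), ('size', 'Blue'))
    on a hypothesized board state and return its answer."""
    if not isinstance(expr, tuple):                     # constants: colors, numbers
        return expr
    op, *args = expr
    vals = [evaluate(a, board) for a in args]
    if op == 'size':                                    # number of tiles of that color
        return sum(1 for color in board.values() if color == vals[0])
    if op == '>':
        return vals[0] > vals[1]
    if op == '=':
        return vals[0] == vals[1]
    raise ValueError('unknown primitive: ' + str(op))

# evaluate(('>', ('size', 'Red'), ('size', 'Blue')), board)  ->  True or False
```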
A final property of interest is the generativity of questions, that is, the ability to construct novel expressions that are useful in a given context. To have a system that can generate expressions in this language we designed a grammar that is context-free with a few exceptions, inspired by BIBREF21 . The grammar consists of a set of rewrite rules, which are recursively applied to grow expressions. An expression that cannot be further grown (because no rewrite rules are applicable) is guaranteed to be an interpretable program in our language.
To create a question, our grammar begins with an expression that contains the start symbol A and then rewrites the symbols in the expression by applying appropriate grammatical rules until no symbol can be rewritten. For example, by applying the rules A $\rightarrow $ N, N $\rightarrow $ (size S), and S $\rightarrow $ Red, we arrive at the expression (size Red). Table SI-1 (supplementary materials) shows the core rewrite rules of the grammar. This set of rules is sufficient to represent all 605 questions in the human data set.
To enrich the expressiveness and conciseness of our language we added lambda expressions, mapping, and set operators (Table SI-2, supplementary material). Their use can be seen in the question “Are all ships the same size?”, which can be conveniently represented by (= (map ( $\lambda $ x (size x)) (set Blue Red Purple))). During evaluation, map sequentially assigns each element from the set to x in the $\lambda $ -part and ultimately returns a vector of the three ship sizes. The three ship sizes are then compared by the = function. Of course, the same question could also be represented as (= (= (size Blue) (size Red)) (size Purple)).
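Generating a question then amounts to recursively expanding the start symbol with uniformly chosen rewrite rules; a toy fragment of the grammar is sketched below (the rule set is illustrative and far smaller than Table SI-1). Tracking the log-probability of the chosen rules during sampling also yields the complexity feature and proposal distribution q(x) used later.

```python
import random

RULES = {                      # toy fragment of the rewrite rules, for illustration only
    'A': [['B'], ['N']],
    'B': [['>', 'N', 'N'], ['=', 'N', 'N'], [True]],
    'N': [['size', 'S'], [2], [3], [4]],
    'S': [['Blue'], ['Red'], ['Purple']],
}

def sample_question(symbol='A'):
    """Recursively rewrite non-terminals with uniformly chosen rules until
    only terminals remain, returning a program in the tuple form used above."""
    if symbol not in RULES:
        return symbol                                   # terminal symbol
    parts = [sample_question(s) for s in random.choice(RULES[symbol])]
    return parts[0] if len(parts) == 1 else tuple(parts)

# sample_question()  ->  e.g. ('>', ('size', 'Red'), 3)
```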
Probabilistic generative model
An artificial agent using our grammar is able to express a wide range of questions. To decide which question to ask, the agent needs a measure of question usefulness. This is because not all syntactically well-formed programs are informative or useful. For instance, the program (> (size Blue) (size Blue)) representing the question “Is the blue ship larger than itself?” is syntactically coherent. However, it is not a useful question to ask (and is unlikely to be asked by a human) because the answer will always be False (“no”), no matter the true size of the blue ship.
We propose a probabilistic generative model that aims to predict which questions people will ask and which not. Parameters of the model can be fit to predict the frequency that humans ask particular questions in particular context in the data set by BIBREF3 . Formally, fitting the generative model is a problem of density estimation in the space of question-like programs, where the space is defined by the grammar. We define the probability of question $x$ (i.e., the probability that question $x$ is asked) with a log-linear model. First, the energy of question $x$ is the weighted sum of question features
$$ \mathcal {E}(x) = \theta _1 f_1(x) + \theta _2 f_2(x) + ... + \theta _K f_K(x),$$ (Eq. 13)
where $\theta _k$ is the weight of feature $f_k$ of question $x$ . We will describe all features below. Model variants will differ in the features they use. Second, the energy is related to the probability by
$$ p(x;\mathbf {\theta }) = \frac{ \exp (-\mathcal {E}(x)) }{ \sum _{x \in X} \exp (-\mathcal {E}(x)) } = \frac{ \exp (-\mathcal {E}(x)) }{ Z },$$ (Eq. 14)
where $\mathbf {\theta }$ is the vector of feature weights, highlighting the fact that the probability is dependent on a parameterization of these weights, $Z$ is the normalizing constant, and $X$ is the set of all possible questions that can be generated by the grammar in Tables SI-1 and SI-2 (up to a limit on question length). The normalizing constant needs to be approximated since $X$ is too large to enumerate.
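Given a feature matrix for a large candidate set of questions, the log-linear model of Equations 13-14 can be evaluated as in the sketch below; the explicit normalization over an enumerated candidate set is itself an approximation of Z, as noted above, and the array names are illustrative.

```python
import numpy as np

def question_log_probs(feature_matrix, theta):
    """log p(x; theta) over a candidate set X' approximating the full space.
    feature_matrix: (|X'|, K) array of f_k(x); theta: (K,) feature weights."""
    neg_energy = -(feature_matrix @ theta)               # -E(x) = -sum_k theta_k f_k(x)
    return neg_energy - np.logaddexp.reduce(neg_energy)  # subtract log Z (stable log-sum-exp)
```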
Optimization
The objective is to find feature weights that maximize the likelihood of asking the human-produced questions. Thus, we want to optimize
$$\operatornamewithlimits{arg\,max}_{\mathbf {\theta }} \, \sum _{i = 1}^{N} \text{log}\,p(d^{(i)}; \mathbf {\theta }),$$ (Eq. 17)
where $D = \lbrace d^{(1)},...,d^{(N)}\rbrace $ are the questions (translated into programs) in the human data set. To optimize via gradient ascent, we need the gradient of the log-likelihood with respect to each $\theta _k$ , which is given by
$$\frac{\partial \text{log}\,p(D;\mathbf {\theta })}{\partial \theta _k}= N \, \mathbb {E}_{x \sim D}[f_k(x)] - N \, \mathbb {E}_{x \sim P_\theta }[f_k(x)].$$ (Eq. 18)
The term $\mathbb {E}_{x \sim D}[f_k(x)] = \frac{1}{N}\sum _{i=1}^{N}f_k(d^{(i)})$ is the expected (average) feature values given the empirical set of human questions. The term $\mathbb {E}_{x \sim P_\theta }[f_k(x)] = \sum _{x \in X} f_k(x) p(x;\mathbf {\theta })$ is the expected feature values given the model. Thus, when the gradient is zero, the model has perfectly matched the data in terms of the average values of the features.
Computing the exact expected feature values from the model is intractable, since there is a very large number of possible questions (as with the normalizing constant in Equation 14 ). We use importance sampling to approximate this expectation. To create a proposal distribution, denoted as $q(x)$ , we use the question grammar as a probabilistic context free grammar with uniform distributions for choosing the re-write rules.
The details of optimization are as follows. First, a large set of 150,000 questions is sampled in order to approximate the gradient at each step via importance sampling. Second, to run the procedure for a given model and training set, we ran 100,000 iterations of gradient ascent at a learning rate of 0.1. Last, for the purpose of evaluating the model (computing log-likelihood), the importance sampler is also used to approximate the normalizing constant in Eq. 14 via the estimator $Z \approx \mathbb {E}_{x\sim q}[\frac{p(x;\mathbf {\theta })}{q(x)}]$ .
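A sketch of one importance-sampled gradient-ascent step (Equation 18) is given below; the array names are illustrative assumptions, and sample_logq holds log q(x) for questions drawn from the PCFG proposal.

```python
import numpy as np

def gradient_step(theta, data_features, sample_features, sample_logq, lr=0.1):
    """data_features: (N, K) features of the human questions D;
    sample_features: (M, K) features of questions sampled from q(x)."""
    log_w = -(sample_features @ theta) - sample_logq      # log of p~(x)/q(x), up to log Z
    w = np.exp(log_w - np.logaddexp.reduce(log_w))        # self-normalized importance weights
    model_expectation = w @ sample_features               # approx. E_{x ~ p_theta}[f(x)]
    data_expectation = data_features.mean(axis=0)         # E_{x ~ D}[f(x)]
    return theta + lr * (data_expectation - model_expectation)
```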
Question features
We now turn to describe the question features we considered (cf. Equation 13 ), namely two features for informativeness, one for length, and four for the answer type.
Informativeness. Perhaps the most important feature is a question's informativeness, which we model through a combination of Bayesian belief updating and Expected Information Gain (EIG). To compute informativeness, our agent needs to represent several components: A belief about the current world state, a way to update its belief once it receives an answer, and a sense of all possible answers to the question. In the Battleship game, an agent must identify a single hypothesis $h$ (i.e., a hidden game board configuration) in the space of possible configurations $H$ (i.e., possible board games). The agent can ask a question $x$ and receive the answer $d$ , updating its hypothesis space by applying Bayes' rule, $p(h|d;x) \propto p(d|h;x)p(h)$ . The prior $p(h)$ is specified first by a uniform choice over the ship sizes, and second by a uniform choice over all possible configurations given those sizes. The likelihood $p(d|h;x) \propto 1$ if $d$ is a valid output of the question program $x$ when executed on $h$ , and zero otherwise.
The Expected Information Gain (EIG) value of a question $x$ is the expected reduction in uncertainty about the true hypothesis $h$ , averaged across all possible answers $A_x$ of the question
$$\mathit {EIG}(x) = \sum _{d \in A_x} p(d;x) \Big [ I[p(h)] - I[p(h|d;x)] \Big ],$$ (Eq. 22)
where $I[\cdot ]$ is the Shannon entropy. Complete details about the Bayesian ideal observer follow the approach we used in BIBREF3 . Figure 3 shows the EIG scores for the top two human questions for selected contexts.
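A compact sketch of Eq. 22 is given below, assuming the hypothesis space can be enumerated and that executing the question program on a hypothesis returns a deterministic answer (as in the likelihood above); entropies are in bits and all names are our own.

```python
import numpy as np
from collections import defaultdict

def entropy(probs):
    p = np.asarray([v for v in probs if v > 0.0])
    return float(-(p * np.log2(p)).sum())

def expected_information_gain(hypotheses, prior, answer_fn):
    """EIG of one question (Eq. 22).

    hypotheses : list of candidate hidden boards h
    prior      : list of prior probabilities p(h), same length
    answer_fn  : answer_fn(h) -> answer d obtained by executing the
                 question program on hypothesis h
    """
    prior_entropy = entropy(prior)
    by_answer = defaultdict(list)              # group hypotheses by answer
    for h, p in zip(hypotheses, prior):
        by_answer[answer_fn(h)].append(p)
    eig = 0.0
    for probs in by_answer.values():
        p_d = sum(probs)                       # p(d; x)
        posterior = [p / p_d for p in probs]   # p(h | d; x)
        eig += p_d * (prior_entropy - entropy(posterior))
    return eig
```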
In addition to feature $f_\text{EIG}(x) = \text{EIG}(x)$ , we added a second feature $f_\text{EIG=0}(x)$ , which is 1 if EIG is zero and 0 otherwise, to provide an offset to the linear EIG feature. Note that the EIG value of a question always depends on the game context. The remaining features described below are independent of the context.
Complexity. Purely maximizing EIG often favors long and complicated programs (e.g., polynomial questions such as size(Red)+10*size(Blue)+100*size(Purple)+...). Although a machine would not have a problem with answering such questions, it poses a problem for a human answerer. Generally speaking, people prefer concise questions and the rather short questions in the data set reflect this. The probabilistic context free grammar provides a measure of complexity that favors shorter programs, and we use the log probability under the grammar $f_\text{comp}(x) = -\log q(x)$ as the complexity feature.
Answer type. We added four features for the answer types Boolean, Number, Color, and Location. Each question program belongs to exactly one of these answer types (see Table SI-1). The type Orientation was subsumed in Boolean, with Horizontal as True and Vertical as False. This allows the model to capture differences in the base rates of question types (e.g., if people prefer true/false questions over other types).
Relevance. Finally, we added one auxiliary feature to deal with the fact that the grammar can produce syntactically coherent programs that have no reference to the game board at all (thus are not really questions about the game; e.g., (+ 1 1)). The “filter” feature $f_\emptyset (x)$ marks questions that refer to the Battleship game board with a value of 1 (see the $^b$ marker in Table SI-1) and 0 otherwise.
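Putting the above together, a question's feature vector could be assembled as in the sketch below; the attributes of the question object (`eig`, `log_q`, `answer_type`, `refers_to_board`) are hypothetical stand-ins for the quantities described in the text.

```python
import numpy as np

ANSWER_TYPES = ["Boolean", "Number", "Color", "Location"]

def question_features(x):
    """Assemble the feature vector f(x) that enters Eq. 13."""
    f_eig = x.eig                                  # informativeness (context-dependent)
    f_eig_zero = 1.0 if x.eig == 0 else 0.0        # offset feature for EIG = 0
    f_complexity = -x.log_q                        # f_comp(x) = -log q(x)
    f_types = [1.0 if x.answer_type == t else 0.0 for t in ANSWER_TYPES]
    f_filter = 1.0 if x.refers_to_board else 0.0   # relevance ("filter") feature
    return np.array([f_eig, f_eig_zero, f_complexity, *f_types, f_filter])
```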
Alternative models
To evaluate which features are important for human-like question generation, we tested the full model that uses all features, as well as variants in which we respectively lesioned one key property. The information-agnostic model did not use $f_\text{EIG}(x)$ and $f_\text{EIG=0}(x)$ and thus ignored the informativeness of questions. The complexity-agnostic model ignored the complexity feature. The type-agnostic model ignored the answer type features.
Results and Discussion
The probabilistic model of question generation was evaluated in two main ways. First, it was tasked with predicting the distribution of questions people asked in novel scenarios, which we evaluate quantitatively. Second, it was tasked with generating genuinely novel questions that were not present in the data set, which we evaluate qualitatively. To make predictions, the different candidate models were fit to 15 contexts and asked to predict the remaining one (i.e., leave one out cross-validation). This results in 64 different model fits (i.e., 4 models $\times $ 16 fits).
First, we verify that compositionality is an essential ingredient in an account of human question asking. For any given context, about 15% of the human questions did not appear in any of the other contexts. Any model that attempts to simply reuse/reweight past questions will be unable to account for this productivity (effectively achieving a log-likelihood of $-\infty $ ), at least not without a much larger training set of questions. The grammar over programs provides one account of the productivity of the human behavior.
Second, we compared different models on their ability to quantitatively predict the distribution of human questions. Table 1 summarizes the model predictions based on the log-likelihood of the questions asked in the held-out contexts. The full model – with learned features for informativeness, complexity, answer type, and relevance – provides the best account of the data. In each case, lesioning its key components resulted in lower quality predictions. The complexity-agnostic model performed far worse than the others, highlighting the important role of complexity (as opposed to pure informativeness) in understanding which questions people choose to ask. The full model also outperformed the information-agnostic and type-agnostic models, suggesting that people also optimize for information gain and prefer certain question types (e.g., true/false questions are very common). Because the log-likelihood values are approximate, we bootstrapped the estimate of the normalizing constant $Z$ and compared the full model and each alternative. The full model's log-likelihood advantage over the complexity-agnostic model held in 100% of the bootstrap samples, over the information-agnostic model in 81% of samples, and over type-agnostic model in 88%.
Third, we considered the overall match between the best-fit model and the human question frequencies. Figure 2 shows the correlations between the energy values according to the held-out predictions of the full model (Eq. 13 ) and the frequencies of human questions (e.g., how often participants asked “What is the size of the red ship?" in a particular context). The results show very strong agreement for some contexts along with more modest alignment for others, with an average Spearman's rank correlation coefficient of 0.64. In comparison, the information-agnostic model achieved 0.65, the complexity-agnostic model achieved -0.36, and the type-agnostic model achieved 0.55. One limitation is that the human data is sparse (many questions were only asked once), and thus correlations are limited as a measure of fit. However, there is, surprisingly, no correlation at all between question generation frequency and EIG alone BIBREF3 , again suggesting a key role of question complexity and the other features.
Last, the model was tasked with generating novel, “human-like” questions that were not part of the human data set. Figure 3 shows five novel questions that were sampled from the model, across four different game contexts. Questions were produced by taking five weighted samples from the set of programs produced in Section "Optimization" for approximate inference, with weights determined by their energy (Eq. 14 ). To ensure novelty, samples were rejected if they were equivalent to any human question in the training data set or to an already sampled question. Equivalence between any two questions was determined by the mutual information of their answer distributions (i.e., their partitions over possible hypotheses), or if the programs differed only in their arguments (e.g. (size Blue) is equivalent to (size Red)). The generated questions in Figure 3 demonstrate that the model is capable of asking novel (and clever) human-like questions that are useful in their respective contexts. Interesting new questions that were not observed in the human data include “Are all the ships horizontal?" (Context 7), “What is the top left of all the ship tiles?" (Context 9), “Are blue and purple ships touching and red and purple not touching (or vice versa)?" (Context 9), and “What is the column of the top left of the tiles that have the color of the bottom right corner of the board?" (Context 15). The four contexts were selected to illustrate the creative range of the model, and the complete set of contexts is shown in the supplementary materials.
Conclusions
People use question asking as a cognitive tool to gain information about the world. Although people ask rich and interesting questions, most active learning algorithms make only focused requests for supervised labels. Here we formalize computational aspects of the rich and productive way that people inquire about the world. Our central hypothesis is that active machine learning concepts can be generalized to operate over a complex, compositional space of programs that are evaluated over possible worlds. To that end, this project represents a step toward more capable active learning machines.
There are also a number of limitations of our current approach. First, our system operates on semantic representations rather than on natural language text directly, although it is possible that such a system can interface with recent tools in computational linguistics to bridge this gap BIBREF19 . Second, some aspects of our grammar are specific to the Battleship domain. It is often said that some knowledge is needed to ask a good question, but critics of our approach will point out that the model begins with substantial domain knowledge and special purpose structures. On the other hand, many aspects of our grammar are domain general rather than domain specific, including very general functions and programming constructs such as logical connectives, set operations, arithmetic, and mapping. To extend this approach to new domains, it is unclear exactly how much new knowledge engineering will be needed, and how much can be preserved from the current architecture. Future work will bring additional clarity as we extend our approach to different domains.
From the perspective of computational cognitive science, our results show how people balance informativeness and complexity when producing semantically coherent questions. By formulating question asking as program generation, we provide the first predictive model to date of open-ended human question asking.
Acknowledgments
We thank Chris Barker, Sam Bowman, Noah Goodman, and Doug Markant for feedback and advice. This research was supported by NSF grant BCS-1255538, the John Templeton Foundation “Varieties of Understanding” project, a John S. McDonnell Foundation Scholar Award to TMG, and the Moore-Sloan Data Science Environment at NYU.
Supplementary material
The supplementary material contains the following: the game boards that served as contexts in the human question data set (Figure SI-1 ), the full set of grammatical rules used in the simulations (Table SI-1 & SI-2 ), and five novel questions for each context produced by the computational model (Table SI-3 & SI-4 ). | No, it is a probabilistic model trained by finding feature weights through gradient ascent |
5a2c0c55a43dcc0b9439d330d2cbe1d5d444bf36 | 5a2c0c55a43dcc0b9439d330d2cbe1d5d444bf36_0 | Q: How do people engage in Twitter threads on different types of news?
Text: Introduction
Twitter is a social network that has been used worldwide as a means of news spreading. In fact, more than 85% of its users use Twitter to stay updated with news, and do so on a daily basis BIBREF0. The behaviour of users of this social network has been found to be efficient in electronic word-of-mouth processes BIBREF1, which is a key component for the quick spreading of breaking news. This would lead one to think that news-related content occupies the majority of the tweet volume. However, on average, the proportion of news-related content to the total content of tweets is 1% worldwide, but it increases dramatically (up to 15%) in countries in conflict BIBREF2. An extrapolation of these findings indicates that Colombia might have a high proportion of news-related tweets, since it is well-known that Colombia is one of the most violent countries in the world, and has been for decades BIBREF3.
On the other hand, the virality or importance of a tweet conveying news-related information is a relevant measure of what is critical for a community. Therefore, the study of news spreading in a community gives a clear idea of citizens interactions around central topics of interest. Particularly, we are interested in examining how people react to news related to security, crime and violence because this would expose the mechanisms of collective reactions of rejection, acceptance, conflict, among others. This has been considered in case-studies such as Ref. BIBREF4, where messages containing hate or violent speech were identified after Charlie Hebdo's famous shooting, allowing researchers to identify spatio-temporal and textual patterns in the produced tweets after the mentioned disruptive event. Other similar case-studies include the analysis of how people react to homicides in London BIBREF5, to polio health news BIBREF6, and to the aftermath of violence on college campuses BIBREF7. Also, social networks and technology have been signalled as tools used by young people to inflict violent acts against others BIBREF8, BIBREF9, BIBREF10. On a more general ground, the study of these individual or collective reactions is a problem tackled by sentiment analysis, whose objective is to determine whether the sentiment contained in a text is positive or negative, and to what extent BIBREF11, BIBREF12, BIBREF13, BIBREF14. Applied to security-related content in social networks, sentiment analysis could be important in designing and implementing public policies regarding security, crime and violence, as well as educational campaigns where people are taught to communicate their opinions in a non-violent way. However, in order to achieve this, it is desirable to segment news-related tweets so that different topics can be differentiated from one another as we expect Twitter users to react quite differently depending on the security topic they react to. This field is known as topic discovery.
Several proposals for topic discovery are available, among them many Latent Dirichlet Allocation variants and modifications. For instance, Ref. BIBREF15 presents an LDA-based model that relates the topic of a scientific paper with the content of the documents that it cites. The proposed algorithm allows one to track the evolution of a research topic by measuring whether a topic is important (as seen by the scientific community) or not. Furthermore, Ref. BIBREF16 used LDA with variational Gibbs sampling to find general terms that relate the reviews of users on e-commerce web sites, with the intention of improving the experience of new users. Moreover, we have recently combined word-embedding methods and K-means to discover topics and have obtained good interpretability results BIBREF17, BIBREF18.
Thus, in this paper we exemplify a method for topic discovery applied to Colombian news-related tweets that is accurate in the task of segmenting tweets. This method can work at different granularity levels depending on the corpus to be analysed. In this case, we have a corpus of security-related tweets, so that the method will group tweets into the different sub-topics such as murders, robberies, among others. The workings of the method will be detailed in section sec:methods, and the main results will be presented in Section sec:results. Finally, we provide some conclusions in Section sec:conclusions.
Methods and Materials
In this section, we describe the dataset used in our research, as well as the methods to perform fine-grained latent topic analysis to process all the data. The method is largely based on our previous work BIBREF17.
Tweets from the Colombian news Twitter account @NoticiasRCN were collected from 2014 to the present. A total of 258,848 tweets were published by @NoticiasRCN in this period. The method described hereafter was applied in Ref. BIBREF17 to this large corpus at a coarse-grained scale to discover news topics, allowing us to pinpoint groups of tweets sharing semantic content. It was possible to detect tweets regarding politics, sports, the Colombian armed conflict, extreme violence, organised/common crime, among others. In this paper, we focus on the groups of extreme violence and organised/common crime, which contain a total of 47,229 tweets, accounting for 18.2% of all published news. We excluded the Colombian armed conflict, as this is not normally connected to events occurring in cities.
We pre-processed these tweets related to security, crime and violence by removing punctuation, links, hashtags and mentions, we lowercased the text and performed lemmatisation with spaCy's adapted Spanish lemmatiser.
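A minimal sketch of this cleaning step is shown below, assuming spaCy's small Spanish pipeline (es_core_news_sm) is installed; the exact regular expressions used by the authors may differ.

```python
import re
import spacy

nlp = spacy.load("es_core_news_sm")   # assumes the model has been downloaded

def clean_tweet(text):
    """Lowercase, strip links, mentions, hashtags and punctuation,
    then lemmatise with spaCy's Spanish pipeline."""
    text = text.lower()
    text = re.sub(r"http\S+", " ", text)            # links
    text = re.sub(r"[@#]\w+", " ", text)            # mentions and hashtags
    text = re.sub(r"[^\wáéíóúüñ ]+", " ", text)     # punctuation
    doc = nlp(text)
    return [tok.lemma_ for tok in doc if not (tok.is_punct or tok.is_space)]
```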
As the objective of our work is to find a number of topics and their members that are helpful for further analysis, the first task to solve is to methodologically find the number of topics. Our proposal is to combine a topic modelling tool with a measure of how well this tool performed. Therefore, we trained a Latent Dirichlet Allocation (LDA) model BIBREF19 to soft-cluster tweets into topics and then used $C_V$ coherence BIBREF20, BIBREF21 to measure the interpretability of the LDA results. What LDA does is to assign to documents probabilities of belonging to different topics (an integer number $k$ of topics provided by the user), where these probabilities depend on the occurrence of words which are assumed to co-occur in documents belonging to the same topic. This assumption is called a sparse Dirichlet prior. Thus, LDA exploits the fact that even if a word belongs to many topics, occurring in them with different probabilities, it co-occurs with neighbouring words in each topic with other probabilities that help to better define the topics. The best number of topics is the one that most aids human interpretability of the topics. This means that if the topics given by LDA can be well-distinguished by humans, then the corresponding number of topics is acceptable. As mentioned before, a way of measuring this interpretability is the calculation of $C_V$ coherence, which, to the knowledge of the authors, is the measure with the largest correlation to human interpretability. The optimum number of topics can be found at the maximum of $C_V(k)$.
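This model-selection loop can be sketched with gensim as below; the paper averages over 64 restarts per $k$, so the smaller default here, as well as the number of passes, is only for illustration.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from gensim.models.coherencemodel import CoherenceModel

def coherence_by_k(token_lists, k_values, restarts=4):
    """Train LDA for each candidate number of topics k and score it with
    C_V coherence, averaging over random restarts."""
    dictionary = Dictionary(token_lists)
    corpus = [dictionary.doc2bow(tokens) for tokens in token_lists]
    scores = {}
    for k in k_values:
        values = []
        for seed in range(restarts):
            lda = LdaModel(corpus=corpus, id2word=dictionary,
                           num_topics=k, random_state=seed, passes=5)
            cm = CoherenceModel(model=lda, texts=token_lists,
                                dictionary=dictionary, coherence="c_v")
            values.append(cm.get_coherence())
        scores[k] = sum(values) / len(values)
    return scores   # inspect the curve / elbow to choose k
```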
Once the number of topics has been determined, we proceed to find vector-embedding representations of tweets, as they have been previously shown to yield superior results in topic modelling with respect to LDA BIBREF22. Here, we use the word2vec-based BIBREF23, BIBREF24, BIBREF25 FastText model BIBREF26, which essentially uses sub-word information to enrich embeddings generated by a neural network that predicts neighbouring words. The job of FastText is to reduce the dimensionality of one-hot-encoded words (which may be in very large vector spaces of the size of the corpus vocabulary) to a low-dimension and dense vector space (of dimension $N$, selected by the user), where dimensions store highly correlated semantic relations between words and strings of characters. Here a tweet is represented by the sum of the individual vector representations of each word in the tweet.
In this low-dimensional vector space, K-means clustering BIBREF27 is performed for $k$ clusters (i.e. the number of topics found with the LDA-$C_V$ coherence method) in order to hard-cluster the vector representations of the tweets. K-means is a common clustering technique that minimises within-cluster dispersion. Each cluster contains tweets belonging to the same topic.
In order to visualise the clusters and interpret their contents, we performed dimensionality reduction with the Uniform Manifold Approximation and Projection (UMAP) method BIBREF28 to plot vectors onto a 2 dimensional space. UMAP learns the topology of the data to be reduced in dimension by learning a projection of this data onto a lower-dimensional space where the projection preserves, as much as possible, the fuzzy topological structure of the manifold described by the vectors. Additionally, the Python's Bokeh library BIBREF29 was used to create interactive plots of the UMAP reduced representation of FastText vectors, allowing us to quickly examine the structure of the clusters, as well as to read representative tweets of each cluster to determine and label their corresponding topics.
Results and Discussion
Since LDA is a probabilistic method that starts its learning with some random parameters, we measured $C_V$ coherence 64 times for each number of topics $k$ ranging from 2 to 59, and averaged the measurements. The results are shown in fig:cv. It is important to note that a maximum is not reached. However, a saturation of the $C_V$ coherence takes place, making it difficult to select a number of topics. By visual inspection and the use of the so-called elbow method, we pick two different numbers of topics (10 and 16) and analyse them separately in order to determine what the best number of topics is.
Then, the FastText method was trained using a 30-dimensional vector embedding space. In fig:cluste16 we plot the UMAP-reduced vectors for 16 clusters that are at a maximum Euclidean distance of 0.2 (arbitrary units) from their respective cluster centroid, found through K-means clustering. We manually labelled the clusters by reading the 15 most representative tweets of each cluster, i.e. the ones closest to the corresponding cluster centroid. Some clusters were difficult to label, particularly those that clearly overlap with other clusters in the visualisation.
The distribution of tweets along the 16 topics is shown in fig:piesixteen, where news were slightly concentrated on security-related sub-topics like activities in the city, citizen actions against crime, victim stories and common crime. The within-cluster dispersion is comparable between topics, implying that clusters are equally diffused.
Regarding the case of only 10 clusters, in fig:tenclusters we plot the UMAP-reduced vectors that are most representative of each cluster, just as in fig:cluste16. We find that this number of topics is better than 16 clusters, since the identification of the topic was clearer in the case of 10 clusters when reading the 15 most representative tweets of each cluster. This holds true even for the three different topics referring to traffic, as the topics can be well-differentiated (note that UMAP plots those groups close to one another because they all contain traffic-related tweets). For instance, in the traffic and urban planning cluster we find news such as
Bogotá's Secretaría de Movilidad talks about changes that users will find in public transportation this Monday
which is not directly related with security, crime or violence topics. On the other hand, in the traffic accidents cluster we find news such as
In Cartagena, two bus drivers left their vehicles in the middle of the highway to solve their differences by fighting against each other.
Finally, in the events in the traffic cluster, an example of a news tweet is
North highway is collapsed by a triple-crash. The air patroller recommends to take alternate routes.
These tweets exemplify the difference of the three traffic-related clusters. Moreover, by comparing fig:cluste16,fig:tenclusters, it is clear that some clusters are barely changed when increasing the number of topics from 10 to 16 because they are well-defined topics that can be interpreted easily from the human perspective.
Finally, the distribution of the tweets along the 10 different topics is shown in fig:pieten. We found that, with this number of topics, the clusters correspond to more general topics, allowing a better separation of the tweets. It is noteworthy that the “others” cluster mixes different security problems such as aggression against animals and government failures to provide services to people, among others.
Conclusions
In this paper we presented the application of a latent topic discovery method at a fine-grained level to segment Colombian news published through Twitter in different sub-topics regarding security, crime and violence. We were able to find interpretable groups of tweets published by news-media giant @NoticiasRCN, where each group referred to different sorts of security, crime and violence issues.
We identified clear labels that summarise the content of the tweets belonging to each topic: attacks and aggressions, traffic and urban planning, thefts, traffic accidents, social problems, events in the traffic, violence against children and women, medical negligence, murders and others. An important application of the methodology presented in this paper is to detect violent events that go unreported to the police. Furthermore, the tools presented here constitute a critical channel for monitoring violent actions that threaten the security of women, children, minorities and crime victims. Moreover, our method allows the automatic classification of new security-related tweets.
Our method contributes to the segmentation of tweets to better address issues on each security front. In future work we will develop the characterisation of people's reactions to different types of security-, crime- and violence-related issues, and the identification of violent behaviour in social networks, as this is a cornerstone to understanding social and cultural dynamics in our communities. | Unanswerable
0c78d2fe8bc5491b5fd8a2166190c59eba069ced | 0c78d2fe8bc5491b5fd8a2166190c59eba069ced_0 | Q: How are the clusters related to security, violence and crime identified?
Text: Introduction
Twitter is a social network that has been used worldwide as a means of news spreading. In fact, more than 85% of its users use Twitter to stay updated with news, and do so on a daily basis BIBREF0. The behaviour of users of this social network has been found to be efficient in electronic word-of-mouth processes BIBREF1, which is a key component for the quick spreading of breaking news. This would lead one to think that news-related content occupies the majority of the tweet volume. However, on average, the proportion of news-related content to the total content of tweets is 1% worldwide, but it increases dramatically (up to 15%) in countries in conflict BIBREF2. An extrapolation of these findings indicates that Colombia might have a high proportion of news-related tweets, since it is well-known that Colombia is one of the most violent countries in the world, and has been for decades BIBREF3.
On the other hand, the virality or importance of a tweet conveying news-related information is a relevant measure of what is critical for a community. Therefore, the study of news spreading in a community gives a clear idea of citizens interactions around central topics of interest. Particularly, we are interested in examining how people react to news related to security, crime and violence because this would expose the mechanisms of collective reactions of rejection, acceptance, conflict, among others. This has been considered in case-studies such as Ref. BIBREF4, where messages containing hate or violent speech were identified after Charlie Hebdo's famous shooting, allowing researchers to identify spatio-temporal and textual patterns in the produced tweets after the mentioned disruptive event. Other similar case-studies include the analysis of how people react to homicides in London BIBREF5, to polio health news BIBREF6, and to the aftermath of violence on college campuses BIBREF7. Also, social networks and technology have been signalled as tools used by young people to inflict violent acts against others BIBREF8, BIBREF9, BIBREF10. On a more general ground, the study of these individual or collective reactions is a problem tackled by sentiment analysis, whose objective is to determine whether the sentiment contained in a text is positive or negative, and to what extent BIBREF11, BIBREF12, BIBREF13, BIBREF14. Applied to security-related content in social networks, sentiment analysis could be important in designing and implementing public policies regarding security, crime and violence, as well as educational campaigns where people are taught to communicate their opinions in a non-violent way. However, in order to achieve this, it is desirable to segment news-related tweets so that different topics can be differentiated from one another as we expect Twitter users to react quite differently depending on the security topic they react to. This field is known as topic discovery.
Several proposals for topic discovery are available, among them many Latent Dirichlet Allocation variants and modifications. For instance, Ref. BIBREF15 presents an LDA-based model that relates the topic of a scientific paper with the content of the documents that it cites. The proposed algorithm allows one to track the evolution of a research topic by measuring whether a topic is important (as seen by the scientific community) or not. Furthermore, Ref. BIBREF16 used LDA with variational Gibbs sampling to find general terms that relate the reviews of users on e-commerce web sites, with the intention of improving the experience of new users. Moreover, we have recently combined word-embedding methods and K-means to discover topics and have obtained good interpretability results BIBREF17, BIBREF18.
Thus, in this paper we exemplify a method for topic discovery applied to Colombian news-related tweets that is accurate in the task of segmenting tweets. This method can work at different granularity levels depending on the corpus to be analysed. In this case, we have a corpus of security-related tweets, so that the method will group tweets into the different sub-topics such as murders, robberies, among others. The workings of the method will be detailed in section sec:methods, and the main results will be presented in Section sec:results. Finally, we provide some conclusions in Section sec:conclusions.
Methods and Materials
In this section, we describe the dataset used in our research, as well as the methods to perform fine-grained latent topic analysis to process all the data. The method is largely based on our previous work BIBREF17.
Tweets from the Colombian news Twitter account @NoticiasRCN were collected from 2014 to the present. A total of 258,848 tweets were published by @NoticiasRCN in this period. The method described hereafter was applied in Ref. BIBREF17 to this large corpus at a coarse-grained scale to discover news topics, allowing us to pinpoint groups of tweets sharing semantic content. It was possible to detect tweets regarding politics, sports, the Colombian armed conflict, extreme violence, organised/common crime, among others. In this paper, we focus on the groups of extreme violence and organised/common crime, which contain a total of 47,229 tweets, accounting for 18.2% of all published news. We excluded the Colombian armed conflict, as this is not normally connected to events occurring in cities.
We pre-processed these tweets related to security, crime and violence by removing punctuation, links, hashtags and mentions, we lowercased the text and performed lemmatisation with spaCy's adapted Spanish lemmatiser.
As the objective of our work is to find a number of topics and their members that are helpful for further analysis, the first task to solve is to methodologically find the number of topics. Our proposal is to combine a topic modelling tool with a measure of how well this tool performed. Therefore, we trained a Latent Dirichlet Allocation (LDA) model BIBREF19 to soft-cluster tweets into topics and then used $C_V$ coherence BIBREF20, BIBREF21 to measure the interpretability of the LDA results. What LDA does is to assign to documents probabilities of belonging to different topics (an integer number $k$ of topics provided by the user), where these probabilities depend on the occurrence of words which are assumed to co-occur in documents belonging to the same topic. This assumption is called a sparse Dirichlet prior. Thus, LDA exploits the fact that even if a word belongs to many topics, occurring in them with different probabilities, it co-occurs with neighbouring words in each topic with other probabilities that help to better define the topics. The best number of topics is the one that most aids human interpretability of the topics. This means that if the topics given by LDA can be well-distinguished by humans, then the corresponding number of topics is acceptable. As mentioned before, a way of measuring this interpretability is the calculation of $C_V$ coherence, which, to the knowledge of the authors, is the measure with the largest correlation to human interpretability. The optimum number of topics can be found at the maximum of $C_V(k)$.
Once the number of topics has been determined, we proceed to find vector-embedding representations of tweets, as they have been previously shown to yield superior results in topic modelling with respect to LDA BIBREF22. Here, we use the word2vec-based BIBREF23, BIBREF24, BIBREF25 FastText model BIBREF26, which essentially uses sub-word information to enrich embeddings generated by a neural network that predicts neighbouring words. The job of FastText is to reduce the dimensionality of one-hot-encoded words (which may be in very large vector spaces of the size of the corpus vocabulary) to a low-dimension and dense vector space (of dimension $N$, selected by the user), where dimensions store highly correlated semantic relations between words and strings of characters. Here a tweet is represented by the sum of the individual vector representations of each word in the tweet.
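As a sketch of this step (parameter names assume gensim 4), FastText can be trained on the lemmatised tweets and each tweet represented by the sum of its word vectors, using the 30-dimensional embedding space adopted in the experiments below; the window size, minimum count and number of epochs are illustrative choices.

```python
import numpy as np
from gensim.models import FastText

def tweet_vectors(token_lists, dim=30):
    """Train FastText on the tweet corpus and represent each tweet as the
    sum of the vectors of its words (sub-word information included)."""
    model = FastText(sentences=token_lists, vector_size=dim,
                     window=5, min_count=2, epochs=10)
    vectors = np.zeros((len(token_lists), dim))
    for i, tokens in enumerate(token_lists):
        for token in tokens:
            vectors[i] += model.wv[token]   # FastText can embed unseen tokens too
    return model, vectors
```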
In this low-dimensional vector space, K-means clustering BIBREF27 is performed for $k$ clusters (i.e. the number of topics found with the LDA-$C_V$ coherence method) in order to hard-cluster the vector representations of the tweets. K-means is a common clustering technique that minimises within-cluster dispersion. Each cluster contains tweets belonging to the same topic.
In order to visualise the clusters and interpret their contents, we performed dimensionality reduction with the Uniform Manifold Approximation and Projection (UMAP) method BIBREF28 to plot vectors onto a 2 dimensional space. UMAP learns the topology of the data to be reduced in dimension by learning a projection of this data onto a lower-dimensional space where the projection preserves, as much as possible, the fuzzy topological structure of the manifold described by the vectors. Additionally, the Python's Bokeh library BIBREF29 was used to create interactive plots of the UMAP reduced representation of FastText vectors, allowing us to quickly examine the structure of the clusters, as well as to read representative tweets of each cluster to determine and label their corresponding topics.
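These two steps can be sketched as follows with scikit-learn and umap-learn; the 15 tweets closest to each centroid are the ones read to label the clusters, as described in the next section, and names and the random seed are illustrative.

```python
import numpy as np
import umap
from sklearn.cluster import KMeans

def cluster_and_project(vectors, k, seed=0):
    """Hard-cluster tweet vectors into k topics, collect the 15 tweets
    closest to each centroid, and project everything to 2-D with UMAP."""
    km = KMeans(n_clusters=k, random_state=seed, n_init=10).fit(vectors)
    representatives = {}
    for c in range(k):
        idx = np.where(km.labels_ == c)[0]
        dist = np.linalg.norm(vectors[idx] - km.cluster_centers_[c], axis=1)
        representatives[c] = idx[np.argsort(dist)[:15]]
    coords = umap.UMAP(n_components=2, random_state=seed).fit_transform(vectors)
    return km.labels_, representatives, coords
```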
Results and Discussion
Since LDA is a probabilistic method that starts its learning with some random parameters, we measured $C_V$ coherence 64 times for each number of topics $k$ ranging from 2 to 59, and averaged the measurements. The results are shown in fig:cv. It is important to note that a maximum is not reached. However, a saturation of the $C_V$ coherence takes place, making it difficult to select a number of topics. By visual inspection and the use of the so-called elbow method, we pick two different numbers of topics (10 and 16) and analyse them separately in order to determine what the best number of topics is.
Then, the FastText method was trained using a 30-dimensional vector embedding space. In fig:cluste16 we plot the UMAP-reduced vectors for 16 clusters that are at a maximum Euclidean distance of 0.2 (arbitrary units) from their respective cluster centroid, found through K-means clustering. We manually labelled the clusters by reading the 15 most representative tweets of each cluster, i.e. the ones closest to the corresponding cluster centroid. Some clusters were difficult to label, particularly those that clearly overlap with other clusters in the visualisation.
The distribution of tweets along the 16 topics is shown in fig:piesixteen, where news were slightly concentrated on security-related sub-topics like activities in the city, citizen actions against crime, victim stories and common crime. The within-cluster dispersion is comparable between topics, implying that clusters are equally diffused.
Regarding the case of only 10 clusters, in fig:tenclusters we plot the UMAP-reduced vectors that are most representative of each cluster, just as in fig:cluste16. We find that this number of topics is better than 16 clusters, since the identification of the topic was clearer in the case of 10 clusters when reading the 15 most representative tweets of each cluster. This holds true even for the three different topics referring to traffic, as the topics can be well-differentiated (note that UMAP plots those groups close to one another because they all contain traffic-related tweets). For instance, in the traffic and urban planning cluster we find news such as
Bogotá's Secretaría de Movilidad talks about changes that users will find in public transportation this Monday
which is not directly related with security, crime or violence topics. On the other hand, in the traffic accidents cluster we find news such as
In Cartagena, two bus drivers left their vehicles in the middle of the highway to solve their differences by fighting against each other.
Finally, in the events in the traffic cluster, an example of a news tweet is
North highway is collapsed by a triple-crash. The air patroller recommends to take alternate routes.
These tweets exemplify the difference of the three traffic-related clusters. Moreover, by comparing fig:cluste16,fig:tenclusters, it is clear that some clusters are barely changed when increasing the number of topics from 10 to 16 because they are well-defined topics that can be interpreted easily from the human perspective.
Finally, the distribution of the tweets along the 10 different topics is shown in fig:pieten. We found that, with this number of topics, the clusters correspond to more general topics, allowing a better separation of the tweets. It is noteworthy that the “others” cluster mixes different security problems such as aggression against animals and government failures to provide services to people, among others.
Conclusions
In this paper we presented the application of a latent topic discovery method at a fine-grained level to segment Colombian news published through Twitter in different sub-topics regarding security, crime and violence. We were able to find interpretable groups of tweets published by news-media giant @NoticiasRCN, where each group referred to different sorts of security, crime and violence issues.
We identified clear labels that summarise the content of the tweets belonging to each topic: attacks and aggressions, traffic and urban planning, thefts, traffic accidents, social problems, events in the traffic, violence against children and women, medical negligence, murders and others. An important application of the methodology presented in this paper is to detect violent events that go unreported to the police. Furthermore, the tools presented here constitute a critical channel for monitoring violent actions that threaten the security of women, children, minorities and crime victims. Moreover, our method allows the automatic classification of new security-related tweets.
Our method contributes to the segmentation of tweets to better address issues on each security front. In future work we will develop the characterisation of people's reactions to different types of security-, crime- and violence-related issues, and the identification of violent behaviour in social networks, as this is a cornerstone to understanding social and cultural dynamics in our communities. | Yes
d2473c039ab85f8e9e99066894658381ae852e16 | d2473c039ab85f8e9e99066894658381ae852e16_0 | Q: What are the features of used to customize target user interaction?
Text: Introduction
Recent advances in the visual language field, enabled by deep learning techniques, have succeeded in bridging the gap between vision and language in a variety of tasks, ranging from describing the image BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 to answering questions about the image BIBREF4 , BIBREF5 . Such achievements were possible under the premise that there exists a set of ground truth references that are universally applicable regardless of the target, scope, or context. In a real-world setting, however, image descriptions are prone to an infinitely wide range of variabilities, as different viewers may pay attention to different aspects of the image in different contexts, resulting in a variety of descriptions or interpretations. Due to its subjective nature, such diversity is difficult to obtain with conventional image description techniques.
In this paper, we propose a customized image narrative generation task, in which we attempt to actively engage the users in the description generation process by asking questions and directly obtaining their answers, thus learning and reflecting their interest in the description. We use the term image narrative to differentiate our image description from conventional one, in which the objective is fixed as depicting factual aspects of global elements. In contrast, image narratives in our model cover a much wider range of topics, including subjective, local, or inferential elements.
We first describe a model for automatic image narrative generation from single image without user interaction. We develop a self Q&A model to take advantage of wide array of contents available in visual question answering (VQA) task, and demonstrate that our model can generate image descriptions that are richer in contents than previous models. We then apply the model to interactive environment by directly obtaining the answers to the questions from the users. Through a wide range of experiments, we demonstrate that such interaction enables us not only to customize the image description by reflecting the user's choice in the current image of interest, but also to automatically apply the learned preference to new images (Figure 1 ).
Related Works
Visual Language: The workflow of extracting image features with convolutional neural network (CNN) and generating captions with long short-term memory (LSTM) BIBREF6 has been consolidated as a standard for image captioning task. BIBREF0 generated region-level descriptions by implementing alignment model of region-level CNN and bidirectional recurrent neural network (RNN). BIBREF7 proposed DenseCap that generates multiple captions from an image at region-level. BIBREF8 built SIND dataset whose image descriptions display a more casual and natural tone, involving aspects that are not factual and visually apparent. While this work resembles the motivation of our research, it requires a sequence of images to fully construct a narrative.
Visual question answering (VQA) has escalated the interaction of language and vision to a new stage, by enabling a machine to answer a variety of questions about the image, not just describe certain aspects of the image. A number of different approaches have been proposed to tackle VQA task, but classification approach has been shown to outperform generative approach BIBREF9 , BIBREF10 . BIBREF11 proposed multimodal compact bilinear pooling to compactly combine the visual and textual features. BIBREF12 proposed an attention-based model to select a region from the image based on text query. BIBREF13 introduced co-attention model, which not only employs visual attention, but also question attention.
User Interaction: Incorporating interaction with users into the system has rapidly become a research interest. Visual Dialog BIBREF5 actively involves user interaction, which in turn affects the responses generated by the system. Its core mechanism, however, functions in an inverse direction from our model, as the users ask the questions about the image, and the system answers them. Thus, the focus is on extending the VQA system to a more context-dependent, and interactive direction. On the other hand, our model's focus is on generating customized image descriptions, and user interaction is employed to learn the user's interest, whereas Visual Dialog is not concerned about the users themselves.
BIBREF14 introduces an interactive game, in which the system attempts to localize the object that the user is paying attention to by asking relevant questions that narrow down the potential candidates, and obtaining answers from the users. This work is highly relevant to our work in that user's answers directly influence the performance of the task, but our focus is on contents generation instead of object localization or gaming. Also, our model not only utilizes user's answer for current image, but further attempts to apply it to new images. Recent works in reinforcement learning (RL) have also employed interactive environment by allowing the agents to be taught by non-expert humans BIBREF15 . However, its main purpose is to assist the training of RL agents, while our goal is to learn the user's interest specifically.
Automatic Image Narrative Generation
We first describe a model to generate image narrative that covers a wide range of topics without user interaction. We propose a self Q&A model where questions are generated from multiple regions, and VQA is applied to answer the questions, thereby generating image-relevant contents.
Region Extraction: Following BIBREF16 , we first extract region candidates from the feature map of an image, by applying linear SVM trained on annotated bounding boxes at multiple scales, and applying non-maximal suppression. The region candidates then go through inverse cascade from upper, fine layer to lower, coarser layers of CNN, in order to better-localize the detected objects. This results in region proposals that are more contents-oriented than selective search BIBREF17 or Edge Boxes BIBREF18 . We first extracted top 10 regions per image. Figure 2 shows an example of the regions extracted in this way. In the experiments to follow, we set the number of region proposals K as 5, since the region proposals beyond top 5 tended to be less congruent, thus generating less relevant questions.
Visual Question Generation: In the image captioning task, it is conventional to train an LSTM with human-written captions as ground truth annotations. On the other hand, in the VQA task, questions are usually fed to the LSTM in series with fixed image features, and the answers to the questions become the ground truth labels to be classified. Instead, we replace the human-written captions with human-written questions, so that the LSTM is trained to predict the question, rather than the caption.
Given an image $I$ and a question $Q = (q_0,\ldots,q_N)$, the training proceeds as in BIBREF2 :
$$\begin{aligned} x_{-1} = CNN(I),x_t = W_eq_t,p_{t+1}=LSTM(x_t)\\ \end{aligned}$$ (Eq. 3)
where $W_e$ is a word embedding, $x_t$ is the input to the LSTM at step $t$, and $p_{t+1}$ is the resulting probability distribution over the entire dictionary at step $t$. In the actual generation of questions, decoding is performed over all region proposals $r_0,\ldots,r_N \in I$:
$$\begin{aligned} x_{-1} = CNN(r_i), x_t = W_eq_{t-1}\\ q_{t}=\mathrm {max}_{q\in p} p_{t+1}=\mathrm {argmax} LSTM(x_t) \end{aligned}$$ (Eq. 4)
for $q_0,\ldots,q_N \in Q_{r_i}$. Figure 2 shows examples of questions generated from each region, including the entire image. As shown in the figure, by focusing on different regions and extracting different image features, we can generate multiple image-relevant questions from a single image.
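To make the decoding step concrete, the following PyTorch sketch generates one question per region by greedy decoding from a CNN-feature-conditioned LSTM, in the spirit of Eqs. 3–4; the architecture sizes, the 2048-dimensional region features and the randomly initialised weights are purely illustrative.

```python
import torch
import torch.nn as nn

class QuestionGenerator(nn.Module):
    """CNN-feature-conditioned LSTM decoder: the region feature is fed as
    the first input (x_{-1}), then words are generated greedily. Weights
    here are random and untrained, purely for illustration."""
    def __init__(self, vocab_size, feat_dim=2048, hidden=512):
        super().__init__()
        self.img_proj = nn.Linear(feat_dim, hidden)    # maps CNN(r_i) to LSTM input
        self.embed = nn.Embedding(vocab_size, hidden)  # W_e
        self.lstm = nn.LSTMCell(hidden, hidden)
        self.out = nn.Linear(hidden, vocab_size)       # produces p_{t+1}

    @torch.no_grad()
    def generate(self, region_feat, bos_id, eos_id, max_len=15):
        h = torch.zeros(1, self.lstm.hidden_size)
        c = torch.zeros(1, self.lstm.hidden_size)
        h, c = self.lstm(self.img_proj(region_feat), (h, c))
        token, question = torch.tensor([bos_id]), []
        for _ in range(max_len):
            h, c = self.lstm(self.embed(token), (h, c))
            token = self.out(h).argmax(dim=-1)         # q_t = argmax p_{t+1}
            if token.item() == eos_id:
                break
            question.append(token.item())
        return question

# One question per region proposal r_0, ..., r_N (random features here).
generator = QuestionGenerator(vocab_size=1000)
regions = [torch.randn(1, 2048) for _ in range(5)]
questions = [generator.generate(r, bos_id=1, eos_id=2) for r in regions]
```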
So far, we were concerned with generating “visual” questions. We also seek to generate “non-visual" questions. BIBREF19 generated questions that a human may naturally ask and require common-sense and inference. We examined whether we can train a network to ask multiple questions of such type by visual cues. We replicated the image captioning process described above, with 10,000 images of MS COCO and Flickr segments of VQG dataset, with 5 questions per image as the annotations. Examples of questions generated by training the network solely with non-visual questions are shown in Table 1 .
Visual Question Answering: We now seek to answer the generated questions. We train the question answering system on the VQA dataset BIBREF4 . Question words, represented as one-hot vectors, are sequentially encoded by an LSTM. A hyperbolic tangent non-linearity was employed, and element-wise multiplication was used to fuse the image and word features, from which a softmax classifies the final label as the answer to the visual question. We set the number of possible answers to 1,250.
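A sketch of such a classifier is given below; the embedding and hidden sizes are illustrative, while the element-wise fusion, tanh non-linearity and the 1,250-way output follow the description above.

```python
import torch
import torch.nn as nn

class VQAClassifier(nn.Module):
    """Answer classifier: an LSTM question encoding and an image feature are
    fused by element-wise multiplication after tanh non-linearities, then
    classified over 1,250 candidate answers. Dimensions are illustrative."""
    def __init__(self, vocab_size, num_answers=1250, feat_dim=2048, hidden=1024):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 300)
        self.lstm = nn.LSTM(300, hidden, batch_first=True)
        self.img_proj = nn.Linear(feat_dim, hidden)
        self.classify = nn.Linear(hidden, num_answers)

    def forward(self, img_feat, question_ids):
        _, (h, _) = self.lstm(self.embed(question_ids))  # question encoding
        q = torch.tanh(h[-1])
        v = torch.tanh(self.img_proj(img_feat))
        fused = q * v                                    # element-wise product
        return self.classify(fused)                      # logits over answers
```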
As we augmented the training data with “non-visual” questions, we also need to train the network to “answer” those non-visual questions. Since BIBREF19 provides the questions only, we collected the answers to these questions on Amazon Mechanical Turk. Since many of these questions cannot be answered without specific knowledge beyond what is seen in the image (e.g. “what is the name of the dog?”), we encouraged the workers to use their imagination, but required them to come up with answers that an average person might also think of. For example, people frequently answered the question “what is the name of the man?” with “John” or “Tom.” Such non-visual elements add vividness and story-like characteristics to the narrative as long as they are compatible with the image, even if not entirely verifiable.
Natural Language Processing: We are now given multiple pairs of questions and answers about the image. By design of the VQA dataset, which mostly comprises simple questions regarding only one aspect with the answers mostly being single words, the grammatical structure of most questions and answers can be reduced to a manageable pool of patterns. Exploiting these design characteristics, we combine the obtained pairs of questions and answers to a declarative sentence by application of rule-based transformations, as in BIBREF20 , BIBREF21 .
We first rephrase the question to a declarative sentence by switching word positions, and then insert the answers to its appropriate position, mostly replacing wh-words. For example, a question “What is the man holding?" is first converted to a declarative statement “The man is holding what" and the corresponding answer “frisbee” replaces “what" to make “The man is holding frisbee." Part-of-speech tags with limited usage of parse tree were used to guide the process, particularly conjugation according to tense and plurality. Figure 3 illustrates the workflow of converting question and answer to a declarative sentence. See Supplemental Material for specific conversion rules. Part-of-speech tag notation is as used in PennTree I Tags BIBREF22 .
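A highly simplified version of this conversion is sketched below with a few regular-expression rules; the actual system relies on part-of-speech tags and a limited parse tree, so these patterns are only illustrative.

```python
import re

def to_declarative(question, answer):
    """Rephrase a simple VQA-style question as a statement and substitute
    the (usually single-word) answer for the wh-word."""
    q = question.strip().rstrip("?").lower()

    # "what is the man holding" -> "the man is holding frisbee"
    m = re.match(r"^what (is|are) (the .+?) (\w+ing)$", q)
    if m:
        return f"{m.group(2)} {m.group(1)} {m.group(3)} {answer}.".capitalize()

    # "what color is the bus" -> "the bus is red"
    m = re.match(r"^what color (is|are) (the .+)$", q)
    if m:
        return f"{m.group(2)} {m.group(1)} {answer}.".capitalize()

    # "how many people are there" -> "there are two people"
    m = re.match(r"^how many (.+) are there$", q)
    if m:
        return f"there are {answer} {m.group(1)}.".capitalize()

    # "is the man wearing a hat" + yes/no -> "the man is (not) wearing a hat"
    m = re.match(r"^(is|are) (the \w+) (.+)$", q)
    if m and answer in ("yes", "no"):
        neg = "" if answer == "yes" else " not"
        return f"{m.group(2)} {m.group(1)}{neg} {m.group(3)}.".capitalize()

    return f"{q}, {answer}.".capitalize()   # fallback

print(to_declarative("What is the man holding?", "frisbee"))
```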
We applied the model described in Section "Automatic Image Narrative Generation" to 40,775 images in test 2014 split of MS COCO BIBREF24 . We compare our proposed model to three baselines as following:
Baseline 1 (COCO): general captioning trained on MS COCO applied to both images in their entireties and the region proposals
Baseline 2 (SIND): captions with model trained on MS SIND dataset BIBREF8 , applied to both images in their entireties and the region proposals
Baseline 3 (DenseCap): captions generated by DenseCap BIBREF7 at both the whole images and regions with top 5 scores using their own region extraction implementation.
Automatic Evaluation: It is naturally of interest to us how humans would actually write image narratives. Not only can we perform automatic evaluation against them for reference, but we can also gain an understanding of what characteristics appear in actual human-written image narratives. We collected image narratives for a subset of the MS COCO dataset. We asked the workers to write a 5-sentence narrative about the image in a story-like way. We made it clear that the description can involve not only factual description of the main event, but also local elements, sentiments, inference, imagination, etc., provided that it relates to the visual elements shown in the image. Table 2 shows examples of the human-written image narratives collected, and they display a number of intriguing characteristics. On top of the elements and styles we asked for, the participants actively employed many other elements encompassing humor, questions, suggestions, etc. in a highly creative way. It is also clear that conventional captioning alone will not be able to capture or mimic the semantic diversity present in them.
We performed automatic evaluation with BLEU BIBREF25 with collected image narratives as ground truth annotations. Table 3 shows the results. While resemblance to human-written image narratives may not necessarily guarantee better qualities, our model, along with DenseCap, showed highest resemblance to human-written image narratives. As we will see in human evaluation, such tendency turns out to be consistent, suggesting that resemblance to human-written image narratives may indeed provide a meaningful reference.
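Such an evaluation can be sketched with NLTK as follows, treating the collected human narratives as (possibly multiple) references per image; tokenisation and smoothing choices are our own.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def bleu_against_human_narratives(generated, references):
    """Score generated narratives against collected human-written ones.

    generated  : list of generated narratives, each a list of tokens
    references : list of lists of tokenised reference narratives,
                 one list of references per image
    """
    smooth = SmoothingFunction().method1
    return {
        "BLEU-1": corpus_bleu(references, generated, weights=(1, 0, 0, 0),
                              smoothing_function=smooth),
        "BLEU-2": corpus_bleu(references, generated, weights=(0.5, 0.5, 0, 0),
                              smoothing_function=smooth),
        "BLEU-4": corpus_bleu(references, generated,
                              weights=(0.25, 0.25, 0.25, 0.25),
                              smoothing_function=smooth),
    }
```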
Human Evaluation: We asked the workers to rate each model's narrative with 5 metrics that we find essential in evaluating narratives; Diversity, Interestingness, Accuracy, Naturalness, and Expressivity (DIANE). Evaluation was performed for 5,000 images with 2 workers per image, and all metrics were rated in the scale of 1 to 5 with 5 being the best performance in each metric. We asked each worker to rate all 4 models for the image on all metrics.
Table 6 shows example narratives from each model. Table 4 shows the performance of each model on the evaluation metrics, along with the percentage of each model receiving the highest score for a given image, including par with other models. Our model obtained the highest score on Diversity, Interestingness and Expressivity, along with the highest overall score and the highest percentage of receiving best scores. In all other metrics, our model was the second highest, closely trailing the models with the highest scores. Table 5 shows our model's performance against each baseline model, in terms of the counts of wins, losses, and pars. ${\chi }^2$ values on 2 degrees of freedom are evaluated against the null hypothesis that all models are equally preferred. The rightmost column in Table 5 corresponds to the one-sided p-values obtained from binomial probability against the same null hypothesis. Both significance tests provide evidence that our model is clearly preferred over others.

Discussion: General image captioning trained on MS COCO shows weaknesses in accuracy and expressivity. Its lower score in accuracy is presumably due to quick diversion from the image contents as it generates captions directly from regions. Since it is restricted by an objective of describing the entire image, it frequently generates irrelevant descriptions on images whose characteristics differ from typical COCO images, such as regions within an image as in our case. Story-like captioning trained on MS SIND obtained the lowest scores in all metrics. In fact, the examples in Table 6 show that the narratives from this model are almost completely irrelevant to the corresponding images, since the correlation between a single particular image and its assigned caption is very low. DenseCap turns out to be the most competitive among the baseline models. It demonstrates the highest accuracy among all models, but shows weaknesses in interestingness and expressivity, due to its invariant tone and design objective of factual description. Our model, highly ranked in all metrics, demonstrates superiority in many indispensable aspects of narrative, while not sacrificing descriptive accuracy.
Interactive Image Narrative Generation
We now extend the automatic image narrative generation model described in Section "Automatic Image Narrative Generation" to interactive environment, in which users participate in the process by answering questions about the image, so that generated narrative varies depending on the user input provided.
We first need to obtain data that reflect personal tendencies of different users. Thus, we not only need to collect data from multiple users so that individual differences exist, but also to collect multiple responses from each user so that individual tendency of each user can be learned.
We generated 10,000 questions that allow for multiple responses following the procedure described in Section "Interactive Image Narrative Generation" . We grouped every 10 questions into one task, and allowed 3 workers per task so that up to 3,000 workers can participate. Since multiple people are participating for the same group of images, we end up obtaining different sets of responses that reflect each individual's tendency.
This yields the permutations of 10 questions taken 2 at a time, $P(10,2)=90$ pairs of triplets for each user, adding up to 270,000 pairs of training data. Note that we assume a source-to-target relation within each pair, so the order within the pair matters. We randomly split these data into 250,000 and 20,000 pairs for the training and validation splits, and performed 5-fold validation with the training procedure described in Section "Interactive Image Narrative Generation". With 705 labels as possible choices, we obtained an average accuracy of 68.72 in predicting the choice on a new image, given the previous choice by the same user. Randomly matching the pairs with choices from different users drops the average score down to 45.17, confirming that consistency in user choices is a key factor in learning preferences.
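As a concrete illustration of how these ordered pairs can be enumerated per user, consider the following minimal sketch (the triplet contents are placeholders):

from itertools import permutations

# Each user answered 10 questions; a triplet stands for (image feature, question feature, answer label).
user_triplets = [("img_{}".format(i), "q_{}".format(i), "ans_{}".format(i)) for i in range(10)]

# Ordered source-to-target pairs: P(10, 2) = 90 pairs per user, i.e. 270,000 pairs for 3,000 users.
pairs = list(permutations(user_triplets, 2))
assert len(pairs) == 90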
Question Generation: For question generation, our interest is in whether our model can generate questions that allow for various responses, rather than a single fixed response. We asked the workers on Amazon Mechanical Turk to decide whether a question can be answered in various ways or has multiple answers, given an image. 1,000 questions were generated with our proposed model using both VQG and VQA, and another 1,000 questions were generated using VQG only.
Table 7 shows the number of votes for each model. It is clear that the questions generated from our proposed model combining VQG and VQA far outperformed the questions generated from VQG only. This is inevitable in the sense that the VQG module was trained with human-written questions that were intended to train the VQA module, i.e. with questions that mostly have clear answers. On the other hand, our model deliberately chose the questions from VQG that have evenly distributed probabilities over answer labels, thus permitting multiple possible responses. Table 8 shows examples of visual questions generated from our model and from VQG only, respectively. In the questions generated from our model, different responses are possible, whereas the questions generated from VQG only are restricted to a single obvious answer.
Reflection of User's Choice on the Same Image: Our next experiment concerns user-dependent image narrative generation. We presented the workers with 3,000 images and associated questions, with 3 possible choices as a response to each question. Each worker freely chooses one of the choices, and is asked to rate the image narrative that corresponds to the answer they chose, considering how well it reflects their answer choice. As a baseline model, we examined a model where the question is absent from the learning and representation, so that only the image and the user input are provided. Rating was performed on a scale of 1 to 5, with 5 indicating that the narrative is highly reflective of their choice. Table 11 shows the result. The agreement score among the workers was calculated based on BIBREF26. The agreement score for our model falls into the range of `moderate' agreement, whereas for the baseline model it is at the lower range of `fair' agreement, as defined by BIBREF27, demonstrating that the users more frequently agreed upon the reliability of the image narratives for our model. Our model clearly has an advantage over using image features only, with a margin considerably larger than the standard deviation. Table 9 shows examples of images, the generated questions, and the image narratives generated depending on the choice made for each question.
Reflection of User's Choice on New Images: Finally, we experiment with applying a user's interest to new images. As in the previous experiment, each worker is presented with an image and a question, with 3 possible choices as an answer to the question. After they choose an answer, they are presented with a new image and a new image narrative. Their task is to determine whether the newly presented image narrative reflects their choice and interest. As a baseline, we again examined a model where the question is absent from the learning and representation stages. In addition, we performed an experiment in which we trained the preference learning module with randomly matched choices. This allows us to examine whether there exists a consistency in user choices that enables us to apply the learned preferences to new image narratives.
Table 12 shows the result. As in the previous experiment, our model clearly has an advantage over using image features only. The inter-rater agreement score is also more stable for our model. Training the preference learning module with randomly matched pairs of choices resulted in a score below that of our proposed model, but above that of using the image features only. This may imply that, even with randomly matched pairs, it is better to train with actual choices made by the users with regard to specific questions, rather than with conspicuous objects only. Overall, the results confirm that it is highly important to provide a context, in our case by generating visual questions, for the system to learn and reflect the user's specific preferences. They also show that it is important to train with consistent choices made by identical users. Table 10 shows examples of image narratives generated for new images, depending on the choice the users made for the original image, given the respective questions.
Applying Interaction within the Same Images
As discussed earlier, we attempt to reflect the user's interest by asking questions that provide visual context. The foremost prerequisite for the interactive questions to perform that function is the possibility of various answers or interpretations. In other words, a question whose answer is so obvious that it can only be answered in an identical way would not be valid as an interactive question. In order to make sure that each generated question allows for multiple possible answers, we internally utilize the VQA module. The question generated by the VQG module is passed on to the VQA module, where the probability distribution $p_{ans}$ over all candidate answers $C$ is determined. If the most likely candidate $c_i=\max p_{ans}$, where $c_i \in C$, has a probability of being the answer above a certain threshold $\alpha $, then the question is considered to have a single obvious answer, and is thus considered ineligible. The next question generated by VQG is then passed on to VQA, and the process is repeated until the following requirement is met:
$$\begin{aligned} c_i < \alpha , \quad c_i = \max p_{ans} \end{aligned}$$ (Eq. 10)
In our experiments, we set $\alpha $ to 0.33. We also excluded yes/no type questions. Figure 4 illustrates an example of a question for which the most likely answer had a probability above the threshold (and which is thus ineligible), and another question whose probability distribution over the candidate answers was more evenly distributed (and which thus proceeds to the narrative generation stage).
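A minimal sketch of this filtering loop is given below; vqg_candidates and vqa_distribution stand in for the outputs of the VQG and VQA modules and are assumptions, as is the simple heuristic used to detect yes/no questions.

ALPHA = 0.33  # threshold on the probability of the most likely candidate answer

def is_yes_no(question):
    # Assumed heuristic: treat questions starting with an auxiliary verb as yes/no questions.
    return question.lower().split()[0] in {"is", "are", "was", "were", "do", "does", "did", "can"}

def select_interactive_question(vqg_candidates, vqa_distribution):
    """Return the first generated question whose answer distribution is not dominated
    by a single candidate, i.e. max(p_ans) < ALPHA."""
    for question in vqg_candidates:
        if is_yes_no(question):
            continue                                  # yes/no questions are excluded
        p_ans = vqa_distribution(question)            # dict: candidate answer -> probability
        if max(p_ans.values()) < ALPHA:
            return question                           # evenly distributed enough -> eligible
    return None                                       # no eligible question among the candidates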
Once a visual question that allows for multiple responses is generated, the user inputs an answer to the question, which is assumed to reflect their interest. We then need to extract a region within the image that corresponds to the user's response. We slightly modify the attention networks introduced in BIBREF23 in order to obtain the coordinates of the region that corresponds to the user response. In BIBREF23, the question itself was fed into the network, so that the region necessary to answer that question is “attended to.” In our case, we are already given the answer to the question by the user. We take advantage of this by making a simple yet efficient modification, in which we replace the wh- question terms with the response provided by the user. For example, a question “what is on the table?” with a user response “pizza” will be converted to the phrase “pizza is on the table,” which is fed into the attention network. This is similar to the rule-based NLP conversion in Section "Automatic Image Narrative Generation". We obtain the coordinates of the region from the second attention layer, by taking the minimum and maximum values along the x-axis and y-axis at which the attention layer reacts to the input phrase. Since the regions are likely to contain the objects of interest at a very tight scale, we extracted the regions at slightly larger sizes than the raw coordinates. A region $r_i$ of size ( $w_{r_i},h_{r_i}$ ) with coordinates $x_{0_i},y_{0_i},x_{max_i},y_{max_i}$ for an image I of size $(W,H)$ is extracted with a magnifying factor $\alpha $ (set to 0.25):
$$\begin{aligned} r^{\prime }_i=(\max (0,x_{0_i}-w_{r_i}\alpha ),\max (0,y_{0_i}-h_{r_i}\alpha ),\\ \min (W,x_{max_i}+w_{r_i}\alpha ),\min (H,y_{max_i}+h_{r_i}\alpha ))\\ \end{aligned}$$ (Eq. 12)
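A direct implementation of Eq. 12 could look as follows; the attention-derived bounding box coordinates are assumed to be given as plain numbers.

def expand_region(x0, y0, x_max, y_max, W, H, alpha=0.25):
    """Enlarge a tight attention-derived box by a factor alpha of its width and height,
    clipping the result to the image boundaries (Eq. 12)."""
    w_r = x_max - x0
    h_r = y_max - y0
    return (max(0, x0 - w_r * alpha),
            max(0, y0 - h_r * alpha),
            min(W, x_max + w_r * alpha),
            min(H, y_max + h_r * alpha))

# Example: expand_region(100, 80, 220, 200, W=640, H=480) -> (70.0, 50.0, 250.0, 230.0)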
Given the region and its features, we can now apply the image narrative generation process described in Section "Automatic Image Narrative Generation" with minor modifications to the setting. Regions are further extracted, visual questions are generated and answered, and rule-based natural language processing techniques are applied to organize them. Figure 4 shows the overall workflow of our model.
Applying Interaction to New Images
We represent each instance of an image, a question, and a user choice as a triplet consisting of the image feature, the question feature, and the label vector for the user's answer. In addition, collecting multiple choices from identical users enables us to represent any two instances by the same user as a pair of triplets, assuming a source-target relation. With these pairs of triplets, we can train the system to predict a user's choice on a new image and a new question, given the same user's choice on the previous image and its associated question. The user's choice $x_{ans_i}$ is represented as a one-hot vector whose size is equal to the number of possible choices. We refer to the fused feature representation of this triplet consisting of image, question, and the user's choice as the choice vector.
We then project the image feature $x_{img_j}$ and question feature $x_{q_j}$ of the second triplet onto the same embedding space as the choice vector. We can now train a softmax classification task in which the feature from the common embedding space predicts the user's choice $x_{ans_j}$ on the new question. In short, we postulate that the answer with index $u$, which maximizes the probability calculated by the LSTM, is the one chosen as $x_{ans_l}$ by the user who chose $x_{ans_k}$, upon seeing a tuple $(x_{img_l},x_{q_l})$ of a new image and a new question:
$$\begin{aligned} u=\arg \max _v P(v;c_k,x_{img_l},x_{q_l}) \end{aligned}$$ (Eq. 15)
where P is a probability distribution determined by a softmax over the space of possible choices, and $c_k$ is the choice vector corresponding to $(x_{img_k},x_{q_k},x_{ans_k})$. This overall procedure and structure are essentially identical to the VQA task, except that we augment the feature space to include the choice vector. Figure 5 shows the overall workflow for training.
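The sketch below, written with PyTorch, illustrates one possible realization of this scheme; the feature dimensions, the fusion by concatenation, and the use of a feed-forward classifier in place of the LSTM are simplifying assumptions, not the exact architecture.

import torch
import torch.nn as nn

class ChoicePredictor(nn.Module):
    # 705 possible choices as in our label set; the feature dimensions are assumed.
    def __init__(self, img_dim=2048, q_dim=512, n_choices=705, hidden=512):
        super().__init__()
        # Choice vector: fused (image, question, one-hot answer) of the source triplet.
        self.choice_proj = nn.Linear(img_dim + q_dim + n_choices, hidden)
        # New (image, question) pair projected onto the same embedding space.
        self.pair_proj = nn.Linear(img_dim + q_dim, hidden)
        self.classifier = nn.Linear(2 * hidden, n_choices)

    def forward(self, img_k, q_k, ans_k_onehot, img_l, q_l):
        c_k = torch.relu(self.choice_proj(torch.cat([img_k, q_k, ans_k_onehot], dim=-1)))
        pair = torch.relu(self.pair_proj(torch.cat([img_l, q_l], dim=-1)))
        logits = self.classifier(torch.cat([c_k, pair], dim=-1))
        return logits  # a softmax over logits gives P(v; c_k, x_img_l, x_q_l)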
Conclusion
We proposed a customized image narrative generation task, along with a model that engages users in the image description generation process by directly asking them questions about the image and collecting their answers. Experimental results demonstrate that our model can successfully diversify image descriptions by reflecting the user's choices, and that the user's learned interest can be further applied to new images.
Acknowledgments
This work was partially funded by the ImPACT Program of the Council for Science, Technology, and Innovation (Cabinet Office, Government of Japan), and was partially supported by CREST, JST.
Why generate questions?
A question may arise as to why we do not simply ask the users to select the region or part of the image that stands out the most to them. In such a case, there would be no need to generate a question for each image, as the question `what stands out the most?' would suffice for all images. This, however, would be equivalent to a simple saliency annotation task, and would not allow for any meaningful customization or optimization per user. Thus, as discussed above, generating a question for each image is intended to provide a context in which each user can apply their own specific interest. Figure 6 shows how providing context via questions can diversify people's attention. Apart from simply generating diverse image narratives based on user input, many potential applications can be conceived. For example, in cases where a thorough description of an entire scene results in a redundant amount of information both quality- and quantity-wise, our model can be applied to describe just the aspect that meets the user's learned interest.
Clarification of DIANE
Few works have tackled the task of narrative evaluation, and those that have hardly take visual information into consideration. Although we could not find an authoritative work on the topic of narrative evaluation, this was our best attempt at reflecting not only precision/recall but also various other aspects contributing to the integrity of an image narrative. Diversity deals with the coverage of diction and contents in the narrative, roughly corresponding to recall. Interestingness measures the extent to which the contents of the narrative grasp the user's attention. Accuracy measures the degree to which the description is relevant to the image, corresponding to precision. Contents that are not visually verifiable are considered accurate only if they are compatible with salient parts of the image. Naturalness refers to the narrative's overall resemblance to human-written text or human-spoken dialogue. Expressivity deals with the range of syntax and tones in the narrative.
Additional Experiments
We also performed an experiment in which we generate image narratives by following the conventional image captioning procedure with human-written image narratives collected on Amazon Mechanical Turk. In other words, we trained an LSTM with CNN features of images and human-written image narratives as ground truth captions. If such a setting turned out to be successful, our model would not have much comparative merit.
We trained an LSTM with the collected image narratives for the training split of MS COCO. We kept the experimental conditions identical to the previous experiments, and trained for 50 epochs. Table 19 shows example narratives generated. Not only does the model fail to learn the structure of image narratives, it hardly generates text longer than one sentence, and even then, its descriptive accuracy is very poor. Since the LSTM now has to adjust its memory cells' dependencies over much longer text, it struggles to even form a complete sentence, not to mention that its descriptions are inaccurate. This tells us that simply training with human-written image narratives does not yield reliable outcomes.
Using the human-written image narratives as references, we further performed a CIDEr BIBREF29 evaluation, shown in Table 25.
Discussion
The experiments above showed that there exists a certain consistency over the choices made by the same user, and that it is thus beneficial to train with the choices made by the same users. Yet, we also need to investigate whether such consistency exists across different categories of images. We ran Fast-RCNN BIBREF28 on the images used in our experiment, and assigned the classes with probability over 0.7 as the labels for each image. We then define any two images to be in the same category if any of the assigned labels overlaps. Of the 3,000 pairs of images used in the experiment, 952 pairs had images with at least one overlapping label. Our proposed model had an average human evaluation score of 4.35 for pairs with overlapping labels and 2.98 for pairs without overlapping labels. The baseline model with image features only had 2.57 for pairs with overlapping labels and 2.10 for pairs without. Thus, a large portion of the superior performance of our model comes from the user's consistency for images of the same category, which is an intuitively plausible conclusion.
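The category-overlap criterion described above can be expressed compactly; detections are assumed to be given as (label, probability) pairs.

def image_labels(detections, threshold=0.7):
    # Keep the detected classes whose probability exceeds the threshold.
    return {label for label, prob in detections if prob > threshold}

def same_category(detections_a, detections_b):
    # Two images are in the same category if any assigned label overlaps.
    return bool(image_labels(detections_a) & image_labels(detections_b))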
However, our model also has an advantage over the baseline model for pairs without overlapping labels. This may seem more difficult to explain intuitively, as it is hard to see any explicit correlation between, for example, a car and an apple, other than saying that it is somebody's preference. We manually examined a set of such examples, and frequently found a pattern in which the color of the chosen objects was identical; for example, a red car and an apple. It is difficult to attribute this to a specific cause, but it is likely that there exists some degree of consistency in user choices across different categories, although to a lesser extent than for images in the same category. Also, it is once again confirmed that it is better to train with actual user choices made on specific questions, rather than simply with the most conspicuous objects.
Additional Figures & Tables
Table 13 shows the contrast between the semantic diversity of captions and questions. Figure 7 shows the overall architecture of each of the image captioning, visual question answering, and visual question generation tasks. Table 14 shows statistics for the crowd-sourcing task on collecting answers to non-visual questions in the VQG dataset. Table 15 shows examples of answers to VQG questions collected via crowd-sourcing. Table 1 shows examples of generated questions using the VQG dataset. Table 17 shows examples of human-written image narratives. Table 18 shows statistics for the human-written image narrative collection. Table 21 shows the conversion rules for the natural language processing stage of the narrative generation process as used in Section 3. Tables 22 to 24 show more examples of image narratives. Table 8 shows examples of questions for user interaction that were generated using our proposed model combining VQG and VQA, and the baseline using VQG only. Table 9 shows another example of customized image narratives generated depending on the choices made by the user for the question. Table 10 shows examples of how the choices made by the user for the question were reflected in new images.
Additional Clarifications
Why were yes/no questions excluded? Yes/no questions are less likely to induce multiple answers. The number of possible choices is limited to 2 in most cases, and such questions rarely correspond well to particular regions.
Failure cases for rule-based conversion: Since both questions and answers are human-written, our conversion rule frequently fails on typos, abridgments, words with multiple POS tags, and grammatically incorrect questions. We either manually modified them or left them as they were.
Experiments with different VQA models: The performances of most well-known VQA models currently lie in a relatively tight range. In fact, we tried BIBREF11, the state of the art at the time of the experiment, but did not see any noticeable improvement.
Is the attention network retrained to handle sentences? No, but we found that the attention network trained on questions works surprisingly well for sentences, which makes sense since the key words that provide attention cues are limited in number and are rarely interrogative words.
Why not train with “I don't know?” We were concerned that answers like “I don't know" would likely lead to overfitting. It would also undermine the creative aspect of the image narrative, without adding much to the functional aspect. | image feature, question feature, label vector for the user's answer
5d6cc65b73f428ea2a499bcf91995ef5441f63d4 | 5d6cc65b73f428ea2a499bcf91995ef5441f63d4_0 | Q: How they evaluate quality of generated output?
Text: Introduction
The growing interest in Machine Reading Comprehension (MRC) has sparked significant research efforts on Question Generation (QG), the dual task to Question Answering (QA). In QA, the objective is to produce an adequate response given a query and a text; conversely, for QG, the task is generally defined as generating a relevant question given a source text, focusing on a specific answer span. To our knowledge, all works tackling QG have thus far focused exclusively on generating relevant questions which can be answered given the source text: for instance, given AAAI was founded in 1979 as input, a question likely to be automatically generated would be When was AAAI founded?, where the answer 1979 is a span of the input. Such questions are useful to evaluate reading comprehension for both machines BIBREF0, BIBREF1 and humans BIBREF2.
However, the human ability to ask questions goes well beyond evaluation: asking questions is essential in education BIBREF3 and has been proven to be fundamental for children's cognitive development BIBREF4. Curiosity is baked into the human experience. It allows one to extend one's comprehension and knowledge by asking questions that, while being relevant to context, are not directly answerable by it, thus being inquisitive and curious. The significance of such questions is two-fold: first, they allow for gathering novel relevant information, e.g. a student asking for clarification; second, they are also tightly linked to one's understanding of the context, e.g. a teacher testing a student's knowledge by asking questions whose answers require a deeper understanding of the context and more complex reasoning.
From an applicative point of view, we deem the ability to generate curious, inquisitive questions to be highly beneficial for a broad range of scenarios: i) in the context of human-machine interaction (e.g. robots, chat-bots, educational tools), where the communication with the users could be more natural; ii) during the learning process itself, which could be partially driven in a self-supervised manner, reminiscent of how humans learn by exploring and interacting with their environment.
To our knowledge, this is the first paper attempting to tackle Curiosity-driven neural question generation. The contributions of this paper can be summarized as follows:
we propose a new natural language generation task: curiosity-driven question generation;
we propose a method to derive data for the task from popular non-conversational QA datasets;
we experiment using language model pre-training and reinforcement learning, on two different datasets;
we report a human evaluation analysis to assess both the pertinence of the automatic metrics used and the efficacy of the proposed dataset-creation method above.
Related Works
Deep learning models have been widely applied to text generation tasks such as machine translation BIBREF5, abstractive summarization BIBREF6 or dialog BIBREF7, providing significant gains in performance. The state of the art approaches are based on sequence to sequence models BIBREF8, BIBREF9. In recent years, significant research efforts have been directed to the tasks of Machine Reading Comprehension (MRC) and Question Answering (QA) BIBREF0, BIBREF10. The data used for tackling these tasks are usually composed of $\lbrace context, question, answer\rbrace $ triplets: given a context and the question, a model is trained to predict the answer.
Conversely, the Question Generation (QG) task introduced by BIBREF11, BIBREF12 can be considered the dual task to QA BIBREF13: thus, given a context and (optionally) an answer, the model is trained to generate the question. Following QA, research on QG BIBREF14 has also seen increasing interest from the community. One of the main motivations is that an effective QG model can be used to generate synthetic data in order to augment existing QA datasets BIBREF15, BIBREF16. For instance, BIBREF15 proposed a reinforcement learning setup trained using a QA-based metric reward: given a paragraph and an answer, the model first generates questions; then, the paragraph and the corresponding generated questions are given to a pre-trained QA model which predicts an answer; finally, the reward is computed as the number of overlapping words between the ground truth answer and the predicted answer. For an extensive evaluation of models trained with different rewards we refer the reader to BIBREF17. Most of these works followed BIBREF18, who applied reinforcement learning to neural machine translation. First, a sequence to sequence model is trained under teacher forcing BIBREF19 to optimize cross-entropy, hence helping to reduce the action space (i.e. the vocabulary size). Then, the model is finetuned with a mix of teacher forcing and REINFORCE BIBREF20.
For automatic evaluation, all previous works on QG resort to BLEU metrics BIBREF21, originally developed and widely used in Machine Translation. However, how to evaluate text generation models remains an open research question: BIBREF22 pointed out that, on QG tasks, the correlation between BLEU and human evaluation was poor.
A thorough investigation of the behavior of open-domain conversational agents has been recently presented by BIBREF23. Using controllable neural text generation methods, the authors control important attributes for chit-chat dialogues, including question-asking behavior. Among the take-away messages of this work, is that question-asking represents an essential component in an engaging chit-chat pipeline: the authors find, via a large-scale human validation study, that agents with higher rates of question-asking obtain qualitative improvements in terms of inquisitiveness, interestingness and engagingness.
Indeed, in a conversational setting, it can be expected that the nature of follow-up questions significantly differs from those used as targets in a traditional QG training setup: as mentioned earlier, QG has so far been tackled as the dual task to QA, hence training models to generate questions whose answer is present in the input context. On the contrary, we argue that in natural conversations the questions follow the input context but are rather a means to augment one's knowledge (thus, their answer is not present in the input context). In this work, we thus define the task as Curiosity-driven Question Generation.
Dataset
Question Answering datasets are usually composed of a set of questions associated with the corresponding answers and the reading passages (the context) containing the answer. The QA task is defined as finding the answer to a question given the context. Conversely, the Question Generation (QG) task is to generate the question given the input and (optionally) the answer. Most previous efforts on the QG task have resorted to the widely used Stanford Question Answering Dataset (SQuAD) BIBREF10. It contains roughly 100,000 questions posed by crowd-workers on a selected sample of Wikipedia articles. Several other QA datasets have also been recently published, accounting for characteristics such as requiring multi-passage or discrete reasoning BIBREF24, BIBREF25; further, conversational QA datasets have been made available: CoQA BIBREF26 and QuAC BIBREF27 have the desirable property of being in a dialogue-like setting.
In our scenario, Curiosity-driven QG, the reading passage associated with a question should not contain the answer, but rather pave the way for asking a new question – whose answer would eventually enrich the knowledge on the matter at hand. Therefore, a natural choice to build QG data would be to rely on existing datasets for conversational QA. A detailed comparison of the above-mentioned CoQA and QuAC datasets is provided by BIBREF28, who reports the proportion of Topic Error (questions unlikely to be asked in the context) and Entity Salad (i.e. questions unanswerable for any context): CoQA includes a significantly higher proportion of Topic Error and Entity Salad compared to QuAC. For this reason, we resort to QuAC in order to derive data for Curiosity-driven QG.
Furthermore, recognizing the fact that the great majority of QA datasets available does not account for conversational characteristics, we propose a methodology to derive data for Curiosity-driven Question Generation from standard QA datasets, applying it to the popular SQuAD BIBREF10.
For both our data sources, and consistently with standard QA and QG tasks, we encode each sample as a triplet $\lbrace P, q, a\rbrace $ where the paragraph $P$ comprises $n$ sentences $[s_0,..., s_n]$, and $a$ represents the answer to the question $q$. A canonical QG approach would thus use $s_a$, i.e. the sentence of $P$ that contains the answer, as source, and $q$ as generation target. On the contrary, for Curiosity-driven QG, any sentence $s_x$ from $P$ can potentially be used as the source sequence, as long as it does not contain the answer – i.e. under the necessary constraint of $x \ne a$. In the following subsections, we elaborate on additional constraints depending on the nature of the source data.
In general, we define samples as triplets $\lbrace s_x, P^{\prime }, y\rbrace $, where $s_x$ and $P^{\prime }$ are, respectively, the input sentence and the paragraph $P$ modified according to the appropriate dataset-dependent constraint, and $y$ is the reference (target) question.
Dataset ::: Conversational QA Data
As mentioned above, we first derive our data from the QuAC dataset, which is built from Wikipedia articles by iterating over the following procedure: given a sentence, a student annotator asks a relevant question for which he does not have the answer; then, the teacher – another annotator – retrieves a sentence that contains the answer. Thus, a QuAC question is curious by design, given the text that precedes it. More formally, for the question $q$ (i.e. our target), the source $s_x$ is composed of the concatenation of the sentences of $P$ which appear before the sentence $s_a$ that contains the answer. Therefore, our QuAC-derived dataset is built by applying the stricter constraint $x < a$.
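A minimal sketch of this construction is given below; paragraphs are assumed to come pre-segmented into sentences, with the index of the answer sentence known.

def build_quac_sample(sentences, answer_idx, question):
    """Build one Curiosity-driven QG sample under the constraint x < a:
    the source only contains the sentences that precede the answer sentence."""
    if answer_idx == 0:
        return None                                           # no preceding context available
    source = " ".join(sentences[:answer_idx])                 # s_x: sentences before s_a
    stripped = " ".join(sentences[:answer_idx] + sentences[answer_idx + 1:])  # P': P without s_a
    return {"source": source, "paragraph": stripped, "target": question}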
Numerically, the QuAC dataset amounts to 83,568 questions (on 11,567 articles) for the train set, 7,354 for the validation set and 7,353 for the test set (1,000 articles each). Since the test set is not public, we use the original QuAC validation set to build our test set. From the training set, we randomly drop 1,000 articles (hence, 7,224 samples) which we use to derive our validation set, thus resulting in 76,345 questions for training.
Dataset ::: Standard QA Data
Most of the available QA datasets are not conversational. Thus, we propose a simple method to obtain data for Curiosity-driven QG from standard QA datasets. For this, we use the widely popular SQuAD BIBREF10, and specifically the original splits released by BIBREF11, which are commonly used for Question Generation.
As opposed to QuAC, the questions in SQuAD do not follow a logical ordering. Therefore, any sentence $s_x$ from $P$ can potentially be used as the source sequence, as long as it does not contain the answer $a$ (constraint: $x \ne a$). Nonetheless, as is to be expected for factoid QA datasets, several questions are so specific to their associated sentence $s_a$ that they would be extremely unlikely to be asked without knowing the contents of $s_a$ itself.
To exemplify this issue, take the following paragraph from SQuAD:
Tesla was the fourth of five children. He had an older brother named Dane and three sisters, Milka, Angelina and Marica. Dane was killed in a horse-riding accident when Nikola was five. In 1861, Tesla attended the “Lower" or “Primary" School in Smiljan where he studied German, arithmetic, and religion. In 1862, the Tesla family moved to Gospić, Austrian Empire, where Tesla's father worked as a pastor. Nikola completed “Lower" or “Primary" School, followed by the “Lower Real Gymnasium" or “Normal School.
Given “Dane was killed in a horse-riding accident when Nikola was five." as $s_a$, and operating under the sole constraint of $x \ne a$, the sentence “Tesla was the fourth of five children" would be eligible as a source $s_x$ for the target question “What happened to Dane?". This question can only be asked if either contextual information or background knowledge is available, since it requires knowing that Dane was among Tesla's four siblings.
To overcome this problem, we added an additional constraint based on Named Entity Recognition (NER): $s_x$ is an acceptable input only if all the entities present in the question $q$ are also present in the input sentence $s_x$. In the previous example, this would thus filter out the target “What happened to Dane?" while allowing for “What was Tesla's brother's name?".
For our experiments we used spaCy.
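A sketch of this NER-based filter using spaCy is shown below; the specific English model name is an assumption, and any model exposing named entities would do.

import spacy

nlp = spacy.load("en_core_web_sm")  # assumed English model

def passes_ner_filter(question, source_sentence):
    """Accept the (source, question) pair only if every named entity mentioned
    in the question also appears in the candidate source sentence."""
    q_entities = {ent.text for ent in nlp(question).ents}
    s_entities = {ent.text for ent in nlp(source_sentence).ents}
    return q_entities <= s_entities  # subset test; trivially true if the question has no entities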
In Table TABREF10 we report the number of samples obtained from SQuAD before and after applying the NER filtering. After applying the above methodology to construct a dataset for Curiosity-driven QG, we obtain 25,356 samples for training, 2,076 for development, and 2,087 for testing.
Metrics
Automatic evaluation of Natural Language Generation (NLG) systems is a challenging task BIBREF22. For QG, $n$-gram based similarity metrics are commonly used. These measures evaluate how similar the generated text is to the corresponding reference(s). While they are known to suffer from several shortcomings BIBREF29, BIBREF30, they allow us to evaluate specific properties of the developed models. In this work, the metrics detailed below are proposed, and we evaluate their quality through a human evaluation in subsection SECREF32.
Metrics ::: BLEU
One of the most popular metrics for QG, BLEU BIBREF21 provides a set of measures to compare automatically generated texts against one or more references. In particular, BLEU-N is based on the count of overlapping n-grams between the candidate and its corresponding reference(s).
Metrics ::: Self-BLEU
Within the field of Computational Creativity, diversity is considered a desirable property BIBREF31. Indeed, always generating the same question, such as "What is the meaning of the universe?", would be an undesirable behavior, reminiscent of the "mode collapse" observed in Generative Adversarial Networks (GAN) BIBREF32. Therefore, we adopt Self-BLEU, originally proposed by BIBREF33, as a measure of diversity for the generated text sequences. Self-BLEU is computed as follows: for each generated sentence $s_i$, a BLEU score is computed using $s_i$ as the hypothesis while the other generated sentences are used as references. When averaged over all the generated sentences, it thus provides a measure of how diverse the sentences are. Lower Self-BLEU scores indicate more diversity. We refer to these metrics as Self-B* throughout this paper.
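Self-BLEU can be computed with any standard BLEU implementation; a sketch using NLTK's sentence-level BLEU is given below, where the smoothing choice and whitespace tokenization are our assumptions.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu(generated_questions, n=3):
    """Average BLEU-n of each generated question scored against all the other
    generated questions used as references; lower values indicate more diversity."""
    smooth = SmoothingFunction().method1
    weights = tuple(1.0 / n for _ in range(n))
    scores = []
    for i, hypothesis in enumerate(generated_questions):
        references = [q.split() for j, q in enumerate(generated_questions) if j != i]
        scores.append(sentence_bleu(references, hypothesis.split(),
                                    weights=weights, smoothing_function=smooth))
    return sum(scores) / len(scores)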
Metrics ::: QA-based metrics
Given a text, a question can be considered curious if the answer is not contained in the input text. In our task, this implies that a question $q$ should not be answerable given its corresponding input sentence $s_x$. Thanks to the recent improvements obtained on Question Answering tasks – for instance, human-level performance has been achieved on SQuAD-v1 – the answerability of a question can be automatically measured.
Therefore, given a question-context pair as input to a QA model, two type of metrics can be computed:
n-gram based score: measuring the average overlap between the retrieved answer and the ground truth.
probability score: the confidence of the QA model for its retrieved answer; this corresponds to the probability of being the correct answer assigned by the QA model to the retrieved answer.
Since several diverse questions can be generated for a given input, we consider the latter metric (probability score) to better fit the Curiosity-driven QG task.
Hence, given the evaluated question $q$ and the input text $s_x$, we define a metric QA_prob as the confidence of the QA model that its predicted answer is correct. This metric measures answerability of $q$ given $s_x$: therefore, the lower this score, the less likely the answer is contained in the input text.
While being non-answerable represents a necessary condition for $q$ being a curious question with respect to its context $s_x$, we also want $q$ to be as relevant and useful as possible. To this end, we compute the above QA_prob for question $q$ on $P^{\prime }$, which represents the source paragraph stripped from the sentence containing the answer (see Eq. DISPLAY_FORM6). The higher this score, the more likely the question is relevant and useful to augment the knowledge provided by $s_x$.
Thus, the two proposed metrics are defined as $QA_{source} = QA\_prob(q, s_x)$ and $QA_{context} = QA\_prob(q, P^{\prime })$.
Under our definition, Curiosity-driven questions are those that minimize $QA_{source}$ while maximizing $QA_{context}$. To compute these QA-based metrics, we use the HuggingFace implementation of BERT BIBREF34.
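With the HuggingFace transformers question-answering pipeline, both quantities reduce to reading the confidence score of the extracted answer; the specific checkpoint below is an assumption, and any extractive QA model fine-tuned on SQuAD would serve.

from transformers import pipeline

# Assumed checkpoint; any extractive QA model fine-tuned on SQuAD-v1 would do.
qa = pipeline("question-answering", model="bert-large-uncased-whole-word-masking-finetuned-squad")

def qa_prob(question, context):
    # Confidence of the QA model that its extracted span answers the question in this context.
    return qa(question=question, context=context)["score"]

# QA_source: qa_prob(q, s_x), answerability w.r.t. the input sentence (lower is better).
# QA_context: qa_prob(q, P'), answerability w.r.t. the paragraph stripped of s_a (higher is better).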
Experiments ::: Baseline model
As the baseline architecture we adopt the popular Transformer BIBREF35, which has proven to perform well on a wide range of text generation tasks, among them neural machine translation BIBREF36, automatic summarization BIBREF37, and question generation BIBREF38, BIBREF39. It can be briefly described as a sequence-to-sequence model with a symmetric encoder and decoder based on a self-attention mechanism, which allows it to overcome the inherent obstacles to parallelism present in recurrent models such as Long Short-Term Memory (LSTM) networks BIBREF40.
The copy mechanism BIBREF41 has proven beneficial for QG BIBREF42, BIBREF39: indeed, the QG task is very sensitive to rare and out-of-vocabulary words such as named entities, and such a mechanism helps deal with them efficiently; more than 50% of the answers in the SQuAD dataset, for instance, correspond to named entities (see Table 2 in BIBREF10). Hence, following BIBREF37, BIBREF39, we include a copy mechanism in our Transformer architecture.
For our experiments, we used the following hyper-parameters for the transformer: N = 2 (number of blocks); d_model = 256 (hidden state dimension); d_ff = 512 (position-wise feed-forward networks dimension); and, h = 2 (number of attention heads).
Experiments run with the original hyper-parameters proposed by BIBREF35 obtained consistent and numerically similar results. During training, we used mini-batches of size 64 and the Adam optimizer BIBREF43. At generation time, the decoding steps are computed through the beam search algorithm with $k=5$ beams by default.
Experiments ::: Reinforcement
Reinforcement Learning (RL) is an efficient technique to maximize discrete metrics for text generation. Previously, BIBREF18 used the REINFORCE algorithm BIBREF20 to train RNNs for several generation tasks, showing improvements over previous supervised approaches. Moreover, BIBREF29 combined supervised and reinforcement learning, demonstrating improvements over competing approaches both in terms of ROUGE and on human evaluation.
However, the metrics used as rewards are often overfitted, leading to numerical improvements which do not translate into increased output quality (and rather contribute to degrading it), thus reducing the effectiveness of the trained models for practical applications. On this matter, and with a particular focus on QG, BIBREF17 performed a human evaluation of RL models trained with several metrics as rewards, finding them to be indeed poorly aligned with human judgments: the models appear to learn to exploit the weaknesses of the reward source.
To overcome this issue, we propose to use a balanced reward $r(q, P, P^{\prime })$ that maximizes the probability of finding an answer to the generated question within the input paragraph but not inside the source sentence.
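One natural instantiation of this balance, which we assume here purely for illustration, is the difference between the two QA-based scores:

def balanced_reward(question, source_sentence, stripped_paragraph, qa_prob):
    """Assumed instantiation of the balanced reward r(q, P, P'): favour questions whose
    answer can be found in the surrounding paragraph but not in the source sentence itself."""
    qa_context = qa_prob(question, stripped_paragraph)  # should be high
    qa_source = qa_prob(question, source_sentence)      # should be low
    return qa_context - qa_source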
In our experiments, we follow the approach proposed by BIBREF18, BIBREF29, considering a mixed loss $L_{ml+rl}$ which combines the supervised and reinforcement learning schemes.
The maximum likelihood term $L_{ml}$ is the standard teacher-forcing cross-entropy loss, where $X=[x_1,...,x_n]$ represents the source text of length $n$ and $Y=[y_1,...,y_m]$ the corresponding reference question of length $m$.
Conversely, we define the reinforcement loss $L_{rl}$, to be minimized according to the standard RL actor-critic scheme, with $r(q, P, P^{\prime })$ being the reward function defined above.
Greedy decoding according to the conditional distribution $p(y|X)$ is used to obtain a sequence $\widehat{Y}$. The model is sampled using its Markov property, that is, one token at a time, giving rise to the sequence $Y^s$.
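For reference, the standard formulation of this scheme, following BIBREF18, BIBREF29, and which we assume matches the one used here, reads:

$$\begin{aligned} L_{ml} = -\sum _{t=1}^{m} \log p(y_t \mid y_1, \ldots , y_{t-1}, X) \end{aligned}$$

$$\begin{aligned} L_{rl} = \big (r(\widehat{Y}, P, P^{\prime }) - r(Y^{s}, P, P^{\prime })\big ) \sum _{t} \log p(y^{s}_{t} \mid y^{s}_{1}, \ldots , y^{s}_{t-1}, X) \end{aligned}$$

$$\begin{aligned} L_{ml+rl} = \gamma \, L_{rl} + (1-\gamma )\, L_{ml} \end{aligned}$$

where $\gamma $ is a scalar mixing weight balancing the two terms.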
Experiments ::: Pretraining (PT)
As shown in Table TABREF10, the constrained dataset amounts to roughly three times fewer samples than both QuAC and the original SQuAD dataset it derives from. We thus investigate, for this dataset, the effect of pretraining the model under the traditional (i.e. not Curiosity-driven) QG training setup, using the training set as provided by BIBREF11. Then we resume training on the final dataset obtained after applying the NER-based constraint for Curiosity-driven QG to the same training samples.
For the QuAC Curiosity-driven dataset, the amount of data is comparable to the original dataset, given the conversational nature of QuAC. Therefore, we do not use pretraining for the experiments on QuAC.
Results ::: Automatic metrics
In Table TABREF29 we report the results of our experiments on QuAC for the baseline model (base) and the RL model. We decode with beam search and compute the results for beam sizes $k=[1,3,5]$. While one would expect to see a slight improvement on all the metrics with increasing beam size, we observe a strong divergence among the results: increasing values of $k$ correspond to significant improvements in terms of BLEU-4 and notable drops for BLEU-1. A similar phenomenon was observed by BIBREF44 in the context of machine translation: in that work, the presence of 1 or 2% of noisy data is found to be enough to significantly degrade the beam search results. In our case, one of the most frequently generated questions is Are there any other interesting aspects about this article ?. Indeed, the frequency of this question in our training set amounts to 4.18% of the questions. On the test set we see that roughly 80% of the generated questions start with the token “are". Generating this sequence is not very likely with greedy search ($k=1$): at any time step during the generation, if any other token has a higher probability, this question will be dismissed. On the other hand, with a larger beam, it is likely to be kept and to eventually emerge as the most probable sequence among the remaining beams at the end of the inference.
Moving to our SQuAD-based experiments, we observe that the models trained on SQuAD do not seem to suffer from this issue, since all the metrics improve when increasing the beam size from $k=1$ to $k=5$. This is consistent with the results reported by BIBREF42, where increasing the beam size slightly improves all the metrics. Thus, we only report the results with $k=5$ in Table TABREF30. A possible explanation is that SQuAD, as opposed to QuAC, only contains factoid questions.
We observe that the models trained with RL obtain, as could be expected, higher scores for QAcontext with respect to those trained without RL. A higher QAcontext implies that the QA model is more likely to find an answer in the near context of the source. QAsource is lower, as expected, for the SQuAD-based models, though comparatively higher than for the models trained with RL on QuAC. We identify two possible reasons for this: first, the QA model is trained on answerable questions; second, the nature of the QuAC questions is less factoid than that of the SQuAD ones, and non-factoid questions can arguably be harder for the QA model to evaluate. This could explain why, in the RL setting, QAcontext (the evaluation on answerable questions) is higher for both SQuAD and QuAC models, but only the SQuAD models achieve a lower QAsource (the evaluation on non-answerable questions).
Furthermore, we see that pretraining allows the models to achieve higher BLEU scores at the cost of diversity, as measured by Self-BLEU: the pretrained models show increased accuracy but less diversity in the generated questions. Indeed, we find that pretrained models tend to generate a higher number of questions starting with “What” compared to both the other models and the references; the distribution of the first words of the human questions appears closer to that of the non-pretrained models.
In Figure FIGREF31 we report the distribution of first-word frequencies for the different trained models: the models without pretraining appear closer to the human-written questions and also show more diversity.
Results ::: Human Evaluation
In addition to the automatic metrics, we conducted a human evaluation. We chose to use the data from our SQuAD-based experiments in order to also measure the effectiveness of the proposed approach to derive Curiosity-driven QG data from a standard, non-conversational, QA dataset. We randomly sampled 50 samples from the test set. Three professional English speakers were asked to evaluate the questions generated by: humans (i.e. the reference questions), and models trained with and without pre-training (PT) and reinforcement learning (RL), in all combinations of those methods.
Before submitting the samples for human evaluation, the questions were shuffled. Ratings were collected on a 1-to-5 Likert scale, measuring to what extent the generated questions were answerable by looking at their context; grammatically correct; how much external knowledge is required to answer them; how relevant they are to their context; and how semantically sound they are. The results of the human evaluation are reported in Table TABREF33.
Discussion ::: What is the impact of the pretraining?
We observe that for the pretrained models (i.e. PT and PT+RL) the Correctness is significantly higher than for the models without pretraining (i.e. base and RL). This corroborates the higher BLEU observed for these models in Table TABREF30. Another observation is that External Knowledge is lower for the pretrained models while Relevance is slightly higher. This could be due to the nature of the pretraining, during which the models learn to generate non-curious questions that focus on their inputs. It correlates with the significantly higher QAsource reported in Table TABREF30 for those pretrained models.
Discussion ::: Does Reinforcement help?
From the human assessment we conducted (see Table TABREF33), we observe that the models trained with RL obtain higher scores for Relevance and lower Soundness compared to their non-reinforced counterparts. Further, the results reported in Table TABREF30 show the reinforced models obtaining lower BLEU and $QA_{source}$; conversely, they score higher when it comes to $QA_{context}$. To summarize these results, we conclude that reinforcement brings improvements in terms of the diversity of the generated questions, at the price of slightly degraded formulations in the outputs.
Discussion ::: How effective is our dataset creation methodology?
Looking at the bottom row of Table TABREF33, which shows the results obtained by the reference (i.e. human-generated) questions, we observe the highest relative score for all assessed dimensions, with the exception of Answerability. This indicates that the data we derived seem to fit the task of Curiosity-driven question generation well. As a side note, we remark that the models obtain even lower scores in terms of Answerability than humans, a fact we hypothesize is due to the lower quality of the generated questions: the less sound and correct a question is, the less answerable it would be, regardless of its context.
Discussion ::: How well do the metrics fit human judgement?
We report the pairwise Spearman correlations and p-values among all the different metrics and human measures in Figure FIGREF37. Correlation analysis on the human assessment data shows that BLEU correlates positively with Relevance, Answerability, Soundness and Unexpectedness. The Self-BLEU metrics correlate significantly with Soundness and Correctness, and QAcontext with Relevance. The only human measure that does not correlate significantly with any automatic metric is External Knowledge. It is indeed one of the most challenging aspects to evaluate, even for humans. However, as expected, it correlates negatively with Answerability.
Conclusions
The human skill of asking inquisitive questions allows people to learn from others and increase their knowledge. Curiosity-driven question generation could be a key component in several human-machine interaction scenarios. We thus proposed a new task: Curiosity-driven Question Generation. In the absence of data directly usable for this task, we propose an automatic method to derive it from conversational QA datasets. Recognizing that the great majority of QA datasets are not dialogue-based, we also extend the method to standard QA data. Our experiments, including strategies such as pretraining and reinforcement, show promising results under both automatic and human evaluation.
In future work, we plan to extend the approach to conditional generation of Curiosity-driven questions.
Computational Costs
All our experiments were run on a single NVIDIA 2080 Ti GPU. For the SQuAD experiments, training time amounted to circa 45 minutes and 12 hours for the models built without and with reinforcement, respectively. The additional pretraining step took roughly 2 hours. For the QuAC experiments, training time amounted to circa 2 hours and 15 hours for the models built without and with reinforcement, respectively.
Sample Outputs ::: From QuAC (test set):
Context ($P^{\prime }$): Discovery in the United Kingdom. The Seekers were offered a twelve-month position as on-board entertainment on the Sitmar Line passenger cruise ship Fairsky in March 1964. In May, they travelled to the U.K. and had intended to return to Australia after staying ten weeks, but upon arrival they were offered work by a London booking agency, the Grade Organisation.
Model $\Rightarrow $ Outputs:
base_beam1 $\Rightarrow $ what was the name of the band ?
base_beam3 $\Rightarrow $ are there any other interesting aspects about this article ?
base_beam5 $\Rightarrow $ are there any other interesting aspects about this article ?
RL_beam1 $\Rightarrow $ what was the name of the album ?
RL_beam3 $\Rightarrow $ did they have any other albums ?
RL_beam5 $\Rightarrow $ are there any other interesting aspects about this article ?
Human reference:
human $\Rightarrow $ what else can you tell me about thier discovery ?
Context ($P^{\prime }$): 1977-1980: Death of a Ladies' Man and End of the Century. Phillip Harvey Spector (born Harvey Phillip Spector, December 26, 1939) is an American record producer, musician, and songwriter who developed the Wall of Sound, a music production formula he described as a "Wagnerian" approach to rock and roll. Spector is considered the first auteur among musical artists for the unprecedented freedom and control he had over every phase of the recording process. Additionally, he helped engender the idea of the studio as its own distinct instrument. For these contributions, he is acknowledged as one of the most influential figures in pop music history.
Model $\Rightarrow $ Outputs:
base_beam1 $\Rightarrow $ what was his first album ?
base_beam3 $\Rightarrow $ what happened in 1985 ?
base_beam5 $\Rightarrow $ are there any other interesting aspects about this article ?
RL_beam1 $\Rightarrow $ what was the name of the album ?
RL_beam3 $\Rightarrow $ what was the name of the album ?
RL_beam5 $\Rightarrow $ did he have any other albums ?
Human reference:
human $\Rightarrow $ was death of a ladies man an album ?
Sample Outputs ::: From SQuAD (test set):
Context ($P^{\prime }$): The Broncos defeated the Pittsburgh Steelers in the divisional round, 23–16, by scoring 11 points in the final three minutes of the game.
Model $\Rightarrow $ Outputs:
base $\Rightarrow $ who was the head of the steelers ?
PT $\Rightarrow $ what was the name of the game ?
RL $\Rightarrow $ when was the broncos game ?
PT+RL $\Rightarrow $ what was the name of the steelers ?
Human reference:
human $\Rightarrow $ how many seconds were left in the game when the broncos intercepted the pass that won the game ?
Context ($P^{\prime }$): More than 1 million people are expected to attend the festivities in San Francisco during Super Bowl Week.
Model $\Rightarrow $ Outputs:
base $\Rightarrow $ how many people live in san diego ?
PT $\Rightarrow $ how many people live in san diego ?
RL $\Rightarrow $ what is the average rainfall in san diego ?
PT+RL $\Rightarrow $ how many people live in san diego ?
Human reference:
human $\Rightarrow $ who is the mayor of san francisco ? | Through human evaluation where they are asked to evaluate the generated output on a likert scale.
0a8bc204a76041a25cee7e9f8e2af332a17da67a | 0a8bc204a76041a25cee7e9f8e2af332a17da67a_0 | Q: What automated metrics authors investigate?
Text: Introduction
The growing interest in Machine Reading Comprehension (MRC) has sparked significant research efforts on Question Generation (QG), the dual task to Question Answering (QA). In QA, the objective is to produce an adequate response given a query and a text; conversely, for QG, the task is generally defined as generating a relevant question given a source text, focusing on a specific answer span. To our knowledge, all works tackling QG have thus far focused exclusively on generating relevant questions which can be answered given the source text: for instance, given AAAI was founded in 1979 as input, a question likely to be automatically generated would be When was AAAI founded?, where the answer 1979 is a span of the input. Such questions are useful to evaluate reading comprehension for both machines BIBREF0, BIBREF1 and humans BIBREF2.
However, the human ability to ask questions goes well beyond evaluation: asking questions is essential in education BIBREF3 and has been proven to be fundamental for children's cognitive development BIBREF4. Curiosity is baked into the human experience. It allows one to extend one's comprehension and knowledge by asking questions that, while being relevant to context, are not directly answerable by it, thus being inquisitive and curious. The significance of such questions is two-fold: first, they allow for gathering novel relevant information, e.g. a student asking for clarification; second, they are also tightly linked to one's understanding of the context, e.g. a teacher testing a student's knowledge by asking questions whose answers require a deeper understanding of the context and more complex reasoning.
From an applicative point of view, we deem the ability to generate curious, inquisitive questions to be highly beneficial for a broad range of scenarios: i) in the context of human-machine interaction (e.g. robots, chat-bots, educational tools), where the communication with the users could be more natural; ii) during the learning process itself, which could be partially driven in a self-supervised manner, reminiscent of how humans learn by exploring and interacting with their environment.
To our knowledge, this is the first paper attempting to tackle Curiosity-driven neural question generation. The contributions of this paper can be summarized as follows:
we propose a new natural language generation task: curiosity-driven question generation;
we propose a method to derive data for the task from popular non-conversational QA datasets;
we experiment using language model pre-training and reinforcement learning, on two different datasets;
we report a human evaluation analysis to assess both the pertinence of the automatic metrics used and the efficacy of the proposed dataset-creation method above.
Related Works
Deep learning models have been widely applied to text generation tasks such as machine translation BIBREF5, abstractive summarization BIBREF6 or dialog BIBREF7, providing significant gains in performance. The state of the art approaches are based on sequence to sequence models BIBREF8, BIBREF9. In recent years, significant research efforts have been directed to the tasks of Machine Reading Comprehension (MRC) and Question Answering (QA) BIBREF0, BIBREF10. The data used for tackling these tasks are usually composed of $\lbrace context, question, answer\rbrace $ triplets: given a context and the question, a model is trained to predict the answer.
Conversely, the Question Generation (QG) task introduced by BIBREF11, BIBREF12 can be considered the dual task to QA BIBREF13: thus, given a context and (optionally) an answer, the model is trained to generate the question. Following QA, research on QG BIBREF14 has also seen increasing interest from the community. One of the main motivations is that an effective QG model can be used to generate synthetic data in order to augment existing QA datasets BIBREF15, BIBREF16. For instance, BIBREF15 proposed a reinforcement learning setup trained using a QA-based metric reward: given a paragraph and an answer, the model first generates questions; then, the paragraph and the corresponding generated questions are given to a pre-trained QA model which predicts an answer; finally, the reward is computed as the number of overlapping words between the ground truth answer and the predicted answer. For an extensive evaluation of models trained with different rewards we refer the reader to BIBREF17. Most of these works followed BIBREF18, who applied reinforcement learning to neural machine translation. First, a sequence to sequence model is trained under teacher forcing BIBREF19 to optimize cross-entropy, hence helping to reduce the action space (i.e. the vocabulary size). Then, the model is finetuned with a mix of teacher forcing and REINFORCE BIBREF20.
For automatic evaluation, all previous works on QG resort to BLEU metrics BIBREF21, originally developed and widely used in Machine Translation. However, how to evaluate text generation models remains an open research question: BIBREF22 pointed out that, on QG tasks, the correlation between BLEU and human evaluation was poor.
A thorough investigation of the behavior of open-domain conversational agents has been recently presented by BIBREF23. Using controllable neural text generation methods, the authors control important attributes for chit-chat dialogues, including question-asking behavior. One of the take-away messages of this work is that question-asking represents an essential component in an engaging chit-chat pipeline: the authors find, via a large-scale human validation study, that agents with higher rates of question-asking obtain qualitative improvements in terms of inquisitiveness, interestingness and engagingness.
Indeed, in a conversational setting, it can be expected that the nature of follow-up questions significantly differs from those used as targets in a traditional QG training setup: as mentioned earlier, QG has so far been tackled as the dual task to QA, hence training models to generate questions whose answer is present in the input context. On the contrary, we argue that in natural conversations the questions follow the input context but are rather a means to augment one's knowledge (thus, their answer is not present in the input context). In this work, we thus define the task as Curiosity-driven Question Generation.
Dataset
Question Answering datasets are usually composed of a set of questions associated with the corresponding answers and the reading passages (the context) containing the answer. The QA task is defined as finding the answer to a question given the context. Conversely, the Question Generation (QG) task is to generate the question given the input and (optionally) the answer. Most previous efforts on the QG task have resorted to the widely used Stanford Question Answering Dataset (SQuAD) BIBREF10. It contains roughly 100,000 questions posed by crowd-workers on a selected sample of Wikipedia articles. Several other QA datasets have also been recently published, accounting for characteristics such as requiring multi-passage or discrete reasoning BIBREF24, BIBREF25; further, conversational QA datasets have been made available: CoQA BIBREF26 and QuAC BIBREF27 have the desirable property of being in a dialogue-like setting.
In our scenario, Curiosity-driven QG, the reading passage associated with a question should not contain the answer, but rather pave the way for asking a new question – whose answer would eventually enrich the knowledge on the matter at hand. Therefore, a natural choice to build QG data would be to rely on existing datasets for conversational QA. A detailed comparison of the above-mentioned CoQA and QuAC datasets is provided by BIBREF28, who reports the proportion of Topic Error (questions unlikely to be asked in the context) and Entity Salad (i.e. questions unanswerable for any context): CoQA includes a significantly higher proportion of Topic Error and Entity Salad compared to QuAC. For this reason, we resort to QuAC in order to derive data for Curiosity-driven QG.
Furthermore, recognizing the fact that the great majority of QA datasets available does not account for conversational characteristics, we propose a methodology to derive data for Curiosity-driven Question Generation from standard QA datasets, applying it to the popular SQuAD BIBREF10.
For both our data sources, and consistently with standard QA and QG tasks, we encode each sample as a triplet $\lbrace P, q, a\rbrace $ where the paragraph $P$ comprises $n$ sentences $[s_0,..., s_n]$, and $a$ represents the answer to the question $q$. A canonical QG approach would thus use $s_a$, i.e. the sentence of $P$ that contains the answer, as source, and $q$ as generation target. On the contrary, for Curiosity-driven QG, any sentence $s_x$ from $P$ can potentially be used as the source sequence, as long as it does not contain the answer – i.e. under the necessary constraint of $x \ne a$. In the following subsections, we elaborate on additional constraints depending on the nature of the source data.
In general, we define samples as triplets $\lbrace s_x, P^{\prime }, y\rbrace $,
where $s_x$ and $P^{\prime }$ are, respectively, the input sentence and the paragraph $P$ modified according to the appropriate dataset-dependent constraint, and $y$ is the reference (target) question.
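For illustration, the sample-construction step can be sketched as below; the helper names and the sentence-index bookkeeping are our own, not the authors' code, and only the generic $x \ne a$ constraint is shown here.

```python
# Illustrative sketch (not the authors' code): build Curiosity-driven QG
# samples {s_x, P', y} from a QA record with sentence-split paragraph P,
# question q, and a_idx, the index of the sentence containing the answer.
def derive_samples(sentences, question, a_idx):
    p_prime = " ".join(s for i, s in enumerate(sentences) if i != a_idx)
    samples = []
    for x, s_x in enumerate(sentences):
        if x == a_idx:  # constraint: the source must not contain the answer
            continue
        samples.append({"s_x": s_x, "P_prime": p_prime, "y": question})
    return samples
```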
Dataset ::: Conversational QA Data
As mentioned above, we first derive our data from the QuAC dataset, which is built from Wikipedia articles by iterating over the following procedure: given a sentence, a student annotator asks a relevant question for which he does not have the answer; then, the teacher annotator retrieves a sentence that contains the answer. Thus, a QuAC question is curious by design, given the text that precedes it. More formally, for the question $q$ (i.e. our target), the source $s_x$ is composed of the concatenation of the sentences of $P$ which appear before the sentence $s_a$ that contains the answer. Therefore, our QuAC-derived dataset is built by applying the stricter constraint $x < a$.
Numerically, the QuAC dataset amounts to 83,568 questions (on 11,567 articles) for the train set, 7,354 for the validation set and 7,353 for the test set (1,000 articles each). Since the test set is not public, we use the original QuAC validation set to build our test set. From the training set, we randomly drop 1,000 articles (hence, 7,224 samples), which we use to derive our validation set, thus resulting in 76,345 questions for training.
Dataset ::: Standard QA Data
Most of the available QA datasets are not conversational. Thus, we propose a simple method to obtain data for Curiosity-driven QG from standard QA datasets. For this, we use the widely popular SQuAD BIBREF10, and specifically the original splits released by BIBREF11, which are commonly used for Question Generation.
As opposed to QuAC, the questions in SQuAD do not follow logical ordering. Therefore, any sentence $s_x$ from $P$ can potentially be used as the source sequence, as long as it does not contain the answer $a$ (constraint: $x \ne a$). Nonetheless, as is reasonable for factoid QA datasets, several questions are so specific to their associated sentence $s_a$ that they would be extremely unlikely to be asked without knowing the contents of $s_a$ itself.
To exemplify this issue, take the following paragraph from SQuAD:
Tesla was the fourth of five children. He had an older brother named Dane and three sisters, Milka, Angelina and Marica. Dane was killed in a horse-riding accident when Nikola was five. In 1861, Tesla attended the “Lower" or “Primary" School in Smiljan where he studied German, arithmetic, and religion. In 1862, the Tesla family moved to Gospić, Austrian Empire, where Tesla's father worked as a pastor. Nikola completed “Lower" or “Primary" School, followed by the “Lower Real Gymnasium" or “Normal School.
Given “Dane was killed in a horse-riding accident when Nikola was five." as $s_a$, and operating under the sole constraint of $x \ne a$, the sentence “Tesla was the fourth of five children" would be eligible as a source $s_x$ for the target question “What happened to Dane?". This question can only be asked if either contextual information or background knowledge is available, since it requires to know that Dane was among Tesla's four siblings.
To overcome this problem, we added an additional constraint based on Named Entity Recognition (NER): $s_x$ is an acceptable input only if all the entities present in the question $q$ are also present in the input sentence $s_x$. In the previous example, this would thus filter out the target “What happened to Dane?" while allowing for “What was Tesla's brother's name?".
For the NER step in our experiments, we used spaCy.
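As an illustration, a minimal version of this filter could look as follows; the specific spaCy model and the lowercase surface-form entity matching are our assumptions.

```python
# Sketch of the NER-based constraint: keep a (source, question) pair only if
# every entity mentioned in the question also appears in the source sentence.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed model choice

def passes_ner_filter(s_x, question):
    question_ents = {ent.text.lower() for ent in nlp(question).ents}
    source_ents = {ent.text.lower() for ent in nlp(s_x).ents}
    return question_ents.issubset(source_ents)
```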
In Table TABREF10 we report the number of samples we obtained from SQuAD before and after applying NER filtering. After applying the above methodology to construct a dataset for Curiosity-driven QG, we obtain 25,356 samples for training, 2,076 for development, and 2,087 for testing.
Metrics
Automatic evaluation of Natural Language Generation (NLG) systems is a challenging task BIBREF22. For QG, $n$-gram based similarity metrics are commonly used. These measures evaluate how similar the generated text is to the corresponding reference(s). While they are known to suffer from several shortcomings BIBREF29, BIBREF30, they allow us to evaluate specific properties of the developed models. In this work, we propose the metrics detailed below and evaluate their quality through a human evaluation in subsection SECREF32.
Metrics ::: BLEU
One of the most popular metrics for QG, BLEU BIBREF21 provides a set of measures to compare automatically generated texts against one or more references. In particular, BLEU-N is based on the count of overlapping n-grams between the candidate and its corresponding reference(s).
Metrics ::: Self-BLEU
Within the field of Computational Creativity, Diversity is considered a desirable property BIBREF31. Indeed, always generating the same question, such as “What is the meaning of the universe?", would be an undesirable behavior, reminiscent of the “collapse mode" observed in Generative Adversarial Networks (GAN) BIBREF32. Therefore, we adopt Self-BLEU, originally proposed by BIBREF33, as a measure of diversity for the generated text sequences. Self-BLEU is computed as follows: for each generated sentence $s_i$, a BLEU score is computed using $s_i$ as hypothesis while the other generated sentences are used as references. When averaged over all generated sentences, it thus provides a measure of how diverse the sentences are. Lower Self-BLEU scores indicate more diversity. We refer to these metrics as Self-B* throughout this paper.
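A minimal sketch of this computation is given below; the NLTK smoothing choice is ours and not necessarily the one used in our experiments.

```python
# Sketch: Self-BLEU over a set of generated questions. Each question is scored
# as hypothesis against all other generated questions used as references;
# lower averages indicate more diversity.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu(generated, n=4):
    weights = tuple(1.0 / n for _ in range(n))
    smooth = SmoothingFunction().method1
    scores = []
    for i, hyp in enumerate(generated):
        refs = [g.split() for j, g in enumerate(generated) if j != i]
        scores.append(sentence_bleu(refs, hyp.split(), weights=weights,
                                    smoothing_function=smooth))
    return sum(scores) / len(scores)
```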
Metrics ::: QA-based metrics
Given a text, a question can be considered curious if the answer is not contained in the input text. In our task, this implies that a question $q$ should not be answerable given its corresponding input sentence $s_x$. Thanks to the recent improvements obtained on Question Answering tasks – for instance, human-level performance has been achieved on SQuAD-v1 – the answerability of a question can be automatically measured.
Therefore, given a question-context pair as input to a QA model, two types of metrics can be computed:
n-gram based score: measuring the average overlap between the retrieved answer and the ground truth.
probability score: the confidence of the QA model for its retrieved answer; this corresponds to the probability of being the correct answer assigned by the QA model to the retrieved answer.
Since several diverse questions can be generated for a given input, we consider the latter metric (probability score) to better fit the Curiosity-driven QG task.
Hence, given the evaluated question $q$ and the input text $s_x$, we define a metric QA_prob as the confidence of the QA model that its predicted answer is correct. This metric measures answerability of $q$ given $s_x$: therefore, the lower this score, the less likely the answer is contained in the input text.
While being non-answerable represents a necessary condition for $q$ being a curious question with respect to its context $s_x$, we also want $q$ to be as relevant and useful as possible. To this end, we compute the above QA_prob for question $q$ on $P^{\prime }$, which represents the source paragraph stripped of the sentence containing the answer (as defined above). The higher this score, the more likely the question is relevant and useful to augment the knowledge provided by $s_x$.
Thus, the two proposed metrics are defined as $QA_{source} = QA\_prob(q, s_x)$ and $QA_{context} = QA\_prob(q, P^{\prime })$.
Under our definition, Curiosity-driven questions are those that minimize $QA_{source}$ while maximizing $QA_{context}$. To compute these QA-based metrics, we use the HuggingFace implementation of BERT BIBREF34.
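For illustration, the two metrics can be sketched with the HuggingFace question-answering pipeline as below; the specific checkpoint and the use of the pipeline's confidence score as QA_prob are our assumptions.

```python
# Sketch of the QA-based metrics; the model checkpoint is an assumed choice.
from transformers import pipeline

qa = pipeline("question-answering",
              model="bert-large-uncased-whole-word-masking-finetuned-squad")

def qa_prob(question, context):
    # confidence of the QA model in its predicted answer
    return qa(question=question, context=context)["score"]

def qa_source(question, s_x):       # should be low for a curious question
    return qa_prob(question, s_x)

def qa_context(question, p_prime):  # should be high for a relevant question
    return qa_prob(question, p_prime)
```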
Experiments ::: Baseline model
As baseline architecture we adopt the popular Transformer BIBREF35, which has proven to perform well on a wide range of text generation tasks, among them neural machine translation BIBREF36, automatic summarization BIBREF37, and question generation BIBREF38, BIBREF39. It can be briefly described as a sequence-to-sequence model with a symmetric encoder and decoder based on a self-attention mechanism, which makes it possible to overcome the inherent obstacles to parallelism present in recurrent models such as Long Short-Term Memory (LSTM) networks BIBREF40.
The copy mechanism BIBREF41 has proven beneficial for QG BIBREF42, BIBREF39: indeed, the QG task is very sensitive to rare and out-of-vocabulary words such as named entities, and such a mechanism helps deal with them efficiently; more than 50% of the answers in the SQuAD dataset, for instance, correspond to named entities (see Table 2 in BIBREF10). Hence, following BIBREF37, BIBREF39, we include a copy mechanism in our Transformer architecture.
For our experiments, we used the following hyper-parameters for the transformer: N = 2 (number of blocks); d_model = 256 (hidden state dimension); d_ff = 512 (position-wise feed-forward networks dimension); and, h = 2 (number of attention heads).
Experiments run with the original hyper-parameters as proposed by BIBREF35 obtained consistent and numerically similar results. During training, we used mini-batches of size 64 and the Adam optimizer BIBREF43. At generation time, the decoding steps are computed through the beam search algorithm with $k=5$ beams by default.
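For concreteness, the configuration above can be sketched with a standard PyTorch Transformer as follows; the copy mechanism and the input/output embedding layers are omitted, so this is only an illustrative skeleton and not the exact implementation.

```python
# Illustrative skeleton of the architecture size used in our experiments
# (N=2 blocks, d_model=256, d_ff=512, h=2 heads); copy mechanism and
# embedding/generation layers are not shown.
import torch.nn as nn

transformer = nn.Transformer(
    d_model=256,
    nhead=2,
    num_encoder_layers=2,
    num_decoder_layers=2,
    dim_feedforward=512,
)
training_cfg = {"batch_size": 64, "optimizer": "Adam", "beam_size": 5}
```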
Experiments ::: Reinforcement
Reinforcement Learning (RL) is an efficient technique to maximize discrete metrics for text generation. Previously, BIBREF18 used the REINFORCE algorithm BIBREF20 to train RNNs for several generation tasks, showing improvements over previous supervised approaches. Moreover, BIBREF29 combined supervised and reinforcement learning, demonstrating improvements over competing approaches both in terms of ROUGE and on human evaluation.
However, models often overfit the metrics used as reward, leading to numerical improvements which do not translate into increased output quality – and may even degrade it – thus reducing the effectiveness of the trained models for practical applications. On this matter, and with a particular focus on QG, BIBREF17 performed a human evaluation on RL models trained with several metrics as reward, finding them to be indeed poorly aligned with human judgments: the models appear to learn to exploit the weaknesses of the reward source.
To overcome this issue, we propose to use a balanced reward $r(q, P, P^{\prime })$ that combines the two QA-based metrics, thus maximizing the probability of finding an answer to the generated question within the input paragraph but not inside the source sentence.
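One plausible instantiation of such a balanced reward, reusing the qa_prob sketch above, is the difference between the two QA-based scores; this is an assumption made for illustration, not necessarily the exact formulation used in our experiments.

```python
# Assumed instantiation of the balanced reward: high when the question is
# answerable from the stripped paragraph P' but not from the source sentence.
def balanced_reward(q, s_x, p_prime):
    return qa_prob(q, p_prime) - qa_prob(q, s_x)
```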
In our experiments, we follow the approach proposed by BIBREF18, BIBREF29, considering a mixed loss $L_{ml+rl}$ which combines the supervised and reinforcement learning schemes,
where the maximum likelihood loss $L_{ml}$ is defined as the teacher-forcing cross-entropy $L_{ml} = - \sum _{t=1}^{m} \log p(y_t | y_1, \dots , y_{t-1}, X)$,
where $X=[x_1,...,x_n]$ represents the source text of length $n$ and $Y=[y_1,...,y_m]$ the corresponding reference question of length $m$.
Conversely, we define the reinforcement loss $L_{rl}$ to be minimized according to the standard RL actor-critic scheme, where $r(q, P, P^{\prime })$ is the balanced reward function defined above.
Greedy decoding according to the conditional distribution $p(y|X)$ is used to obtain a sequence $\widehat{Y}$. The model is sampled using its Markov property, that is, one token at a time, giving rise to the sequence $Y^s$.
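A sketch of the resulting mixed objective, in the self-critical style of the cited approaches, is given below; the mixing weight and the use of the greedy sequence as baseline are assumptions rather than the exact recipe used in our experiments.

```python
# Sketch of the mixed teacher-forcing + REINFORCE objective (self-critical
# style). The mixing weight gamma and the greedy baseline are assumptions.
def mixed_loss(logp_reference,   # sum_t log p(y_t | y_<t, X) for the reference Y
               logp_sampled,     # sum_t log p(y^s_t | ...) for the sampled Y^s
               reward_sampled,   # r(Y^s, P, P')
               reward_greedy,    # r(Y_hat, P, P'), baseline from greedy decoding
               gamma=0.9):       # placeholder mixing weight
    l_ml = -logp_reference
    l_rl = -(reward_sampled - reward_greedy) * logp_sampled
    return gamma * l_rl + (1.0 - gamma) * l_ml
```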
Experiments ::: Pretraining (PT)
As shown in Table TABREF10, the constrained dataset amounts to roughly three times fewer samples than both QuAC and the original SQuAD dataset it derives from. We thus investigate, for this dataset, the effect of pretraining the model under the traditional (i.e. not Curiosity-driven) QG training setup, using the training set as provided by BIBREF11. Then, we resume training on the final dataset obtained after applying the NER-based constraint for Curiosity-driven QG on the same training samples.
For the QuAC Curiosity-driven dataset, the amount of data is comparable to the original dataset, given the conversational nature of QuAC. Therefore, we do not use pretraining for the experiments on QuAC.
Results ::: Automatic metrics
In Table TABREF29 we report the results of our experiments on QuAC for the baseline model (base) and the RL model, using beam search and computing results for beam sizes $k=1$, $k=3$ and $k=5$. While one would expect a slight improvement on all metrics with increasing beam size, we instead observe a strong divergence among the results: increasing values of $k$ correspond to significant improvements in terms of BLEU-4 and notable drops for BLEU-1. A similar phenomenon was observed by BIBREF44 in the context of machine translation, where the presence of 1 or 2% of noisy data is found to be enough to significantly degrade the beam search results. In our case, one of the most frequently generated questions is “Are there any other interesting aspects about this article?"; this question accounts for 4.18% of the questions in our training set, and on the test set roughly 80% of the generated questions start with the token “are". Generating this sequence is not very likely with greedy search ($k=1$): at any time step during generation, if any other token has a higher probability, this question will be dismissed. On the other hand, with a larger beam, it is likely to be kept and eventually emerge as the most probable sequence among the remaining beams at the end of inference.
Moving to our SQuAD-based experiments, we observe that the models trained on SQuAD do not seem to suffer from this issue, since all the metrics improve when increasing the beam size from $k=1$ to $k=5$. This is consistent with the results reported by BIBREF42, where increasing the beam size slightly improves all the metrics. Thus, we only report the results with $k=5$ in Table TABREF30. A possible explanation is that SQuAD, as opposed to QuAC, only contains factoid questions.
We observe that the models trained with RL obtain, as could be expected, higher scores for $QA_{context}$ with respect to those trained without RL. A higher $QA_{context}$ implies that the QA model is more likely to find an answer in the near context of the source. $QA_{source}$ is lower, as expected, for the SQuAD-based models, though comparatively higher for the models trained with RL on QuAC. We identify two possible reasons for this: first, the QA model is trained on answerable questions; second, the QuAC questions are less factoid in nature than the SQuAD ones, and non-factoid questions can arguably be harder for the QA model to evaluate. This could explain why, in the RL setting, $QA_{context}$ (the evaluation on answerable questions) is higher for both SQuAD and QuAC models, but only SQuAD models achieve a lower $QA_{source}$ (the evaluation on non-answerable questions).
Furthermore, we see that pretraining allows the model to achieve higher BLEU scores at the cost of diversity, as reflected by the Self-BLEU scores: the generated questions are more accurate but less diverse. Indeed, we find that pretrained models tend to generate a higher number of questions starting with “What" compared to both the other models and the references; the distribution of the first words of the human questions appears closer to that of the non-pretrained models.
In Figure FIGREF31 we report the distribution of the first word frequency for the different models trained: the models without pretraining appear closer to the human-quality samples and also show more diversity.
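The underlying first-word statistics can be computed with a simple counter, sketched below.

```python
# Sketch: frequency distribution of the first word of each question,
# used to compare generated questions against the human references.
from collections import Counter

def first_word_distribution(questions):
    counts = Counter(q.strip().split()[0].lower() for q in questions if q.strip())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.most_common()}
```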
Results ::: Human Evaluation
In addition to the automatic metrics, we conducted a human evaluation. We chose to use the data from our SQuAD-based experiments in order to also measure the effectiveness of the proposed approach to derive Curiosity-driven QG data from a standard, non-conversational, QA dataset. We randomly sampled 50 samples from the test set. Three professional English speakers were asked to evaluate the questions generated by: humans (i.e. the reference questions), and models trained using pre-training (PT), reinforcement learning (RL), and all combinations of those methods.
Before submitting the samples for human evaluation, the questions were shuffled. Ratings were collected on a 1-to-5 Likert scale, measuring to what extent the generated questions were: answerable by looking at their context; grammatically correct; relevant to their context; and semantically sound; as well as how much external knowledge was required to answer them. The results of the human evaluation are reported in Table TABREF33.
Discussion ::: What is the impact of the pretraining?
We observe that for pretrained models (i.e. PT and PT+RL) the Correctness is significantly higher than for the models without pretraining (i.e. base and RL). This corroborates the higher BLEU observed for these models in Table TABREF30. Another observation is that the External Knowledge is lower for the pretrained models while the Relevance is slightly higher. This could be due to the nature of the pretraining, during which the models learn to generate non-curious questions that focus on their inputs. It correlates with the significantly higher $QA_{source}$ reported in Table TABREF30 for those pretrained models.
Discussion ::: Does Reinforcement help?
From the human assessment we conducted (see Table TABREF33), we observe that the models trained with RL obtain higher scores for Relevance and lower Soundness compared to their non-reinforced counterparts. Further, the results reported in Table TABREF30 show the reinforced models obtaining lower BLEU and $QA_{source}$ scores; conversely, they score higher when it comes to $QA_{context}$. To summarize these results, we conclude that reinforcement brings improvements in terms of diversity of the generated questions, at the price of slightly degraded formulations in the outputs.
Discussion ::: How effective is our dataset creation methodology?
Looking at the bottom row of Table TABREF33, which shows the results obtained by the reference (i.e. human-generated) questions, we observe the highest relative score for all assessed dimensions, with the exception of Answerability. This indicates that the data we derived seems to fit the task of Curiosity-driven question generation well. As a side note, we remark that the models obtain even lower scores in terms of Answerability than humans, a fact we hypothesize is due to the lower quality of the generated questions: the less sound and correct, the less answerable a question would be, regardless of its context.
Discussion ::: How well do the metrics fit human judgement?
We report the pairwise Spearman correlation and p-value among all the different metrics and human measures in Figure FIGREF37. Correlation analysis on the human assessment data shows that BLEU correlates positively with Relevance, Answerability, Soundness and Unexpectedness. The Self-BLEU metrics correlate significantly with Soundness and Correctness, and $QA_{context}$ with Relevance. The only human measure that does not correlate significantly with any automatic metric is External Knowledge. It is indeed one of the most challenging aspects to evaluate, even for humans. However, as expected, it correlates negatively with Answerability.
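For reference, the correlation analysis can be reproduced with SciPy as sketched below.

```python
# Sketch: pairwise Spearman correlations (with p-values) between automatic
# metrics and human measures, given per-question scores for each of them.
from scipy.stats import spearmanr

def pairwise_spearman(scores):   # scores: dict mapping name -> list of values
    names = list(scores)
    results = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            rho, p = spearmanr(scores[a], scores[b])
            results[(a, b)] = (rho, p)
    return results
```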
Conclusions
The human skill of asking inquisitive questions allows us to learn from one another and increase our knowledge. Curiosity-driven question generation could be a key component for several human-machine interaction scenarios. We thus proposed a new task: Curiosity-driven Question Generation. In the absence of data directly usable for this task, we proposed an automatic method to derive it from conversational QA datasets. Recognizing that the great majority of QA datasets are not dialogue-based, we also extended the method to standard QA data. Our experiments, including strategies such as pretraining and reinforcement learning, show promising results under both automatic and human evaluation.
In future works, we plan to extend the approach to conditional generation of Curiosity-driven questions.
Computational Costs
All our experiments were run on a single nVidia 2080ti gpu. For SQuAD experiments, training time amounted to circa 45 minutes and 12 hours for the model built without and with reinforcement, respectively. The additional pretraining step took roughly 2 hours. For QuAC experiments, training time amounted to circa 2 hours and 15 hours for the models built without and with reinforcement, respectively.
Sample Outputs ::: From QuAC (test set):
Context ($P^{\prime }$): Discovery in the United Kingdom. The Seekers were offered a twelve-month position as on-board entertainment on the Sitmar Line passenger cruise ship Fairsky in March 1964. In May, they travelled to the U.K. and had intended to return to Australia after staying ten weeks, but upon arrival they were offered work by a London booking agency, the Grade Organisation.
Model $\Rightarrow $ Outputs:
base_beam1 $\Rightarrow $ what was the name of the band ?
base_beam3 $\Rightarrow $ are there any other interesting aspects about this article ?
base_beam5 $\Rightarrow $ are there any other interesting aspects about this article ?
RL_beam1 $\Rightarrow $ what was the name of the album ?
RL_beam3 $\Rightarrow $ did they have any other albums ?
RL_beam5 $\Rightarrow $ are there any other interesting aspects about this article ?
Human reference:
human $\Rightarrow $ what else can you tell me about thier discovery ?
Context ($P^{\prime }$): 1977-1980: Death of a Ladies' Man and End of the Century. Phillip Harvey Spector (born Harvey Phillip Spector, December 26, 1939) is an American record producer, musician, and songwriter who developed the Wall of Sound, a music production formula he described as a "Wagnerian" approach to rock and roll. Spector is considered the first auteur among musical artists for the unprecedented freedom and control he had over every phase of the recording process. Additionally, he helped engender the idea of the studio as its own distinct instrument. For these contributions, he is acknowledged as one of the most influential figures in pop music history.
Model $\Rightarrow $ Outputs:
base_beam1 $\Rightarrow $ what was his first album ?
base_beam3 $\Rightarrow $ what happened in 1985 ?
base_beam5 $\Rightarrow $ are there any other interesting aspects about this article ?
RL_beam1 $\Rightarrow $ what was the name of the album ?
RL_beam3 $\Rightarrow $ what was the name of the album ?
RL_beam5 $\Rightarrow $ did he have any other albums ?
Human reference:
human $\Rightarrow $ was death of a ladies man an album ?
Sample Outputs ::: From SQuAD (test set):
Context ($P^{\prime }$): The Broncos defeated the Pittsburgh Steelers in the divisional round, 23–16, by scoring 11 points in the final three minutes of the game.
Model $\Rightarrow $ Outputs:
base $\Rightarrow $ who was the head of the steelers ?
PT $\Rightarrow $ what was the name of the game ?
RT $\Rightarrow $ when was the broncos game ?
PT+RT $\Rightarrow $ what was the name of the steelers ?
Human reference:
human $\Rightarrow $ how many seconds were left in the game when the broncos intercepted the pass that won the game ?
Context ($P^{\prime }$): More than 1 million people are expected to attend the festivities in San Francisco during Super Bowl Week.
Model $\Rightarrow $ Outputs:
base $\Rightarrow $ how many people live in san diego ?
PT $\Rightarrow $ how many people live in san diego ?
RT $\Rightarrow $ what is the average rainfall in san diego ?
PT+RT $\Rightarrow $ how many people live in san diego ?
Human reference:
human $\Rightarrow $ who is the mayor of san francisco ? | BLEU, Self-BLEU, n-gram based score, probability score
81686454f215e28987c7ad00ddce5ffe84b37195 | 81686454f215e28987c7ad00ddce5ffe84b37195_0 | Q: What supervised models are experimented with?
Text: Introduction
NLP can be extremely useful for enabling scientific inquiry, helping us to quickly and efficiently understand large corpora, gather evidence, and test hypotheses BIBREF0 , BIBREF1 . One domain for which automated analysis is particularly useful is Internet security: researchers obtain large amounts of text data pertinent to active threats or ongoing cybercriminal activity, for which the ability to rapidly characterize that text and draw conclusions can reap major benefits BIBREF2 , BIBREF3 . However, conducting automatic analysis is difficult because this data is out-of-domain for conventional NLP models, which harms the performance of both discrete models BIBREF4 and deep models BIBREF5 . Not only that, we show that data from one cybercrime forum is even out of domain with respect to another cybercrime forum, making this data especially challenging.
In this work, we present the task of identifying products being bought and sold in the marketplace sections of these online cybercrime forums. We define a token-level annotation task where, for each post, we annotate references to the product or products being bought or sold in that post. Having the ability to automatically tag posts in this way lets us characterize the composition of a forum in terms of what products it deals with, identify trends over time, associate users with particular activity profiles, and connect to price information to better understand the marketplace. Some of these analyses only require post-level information (what is the product being bought or sold in this post?) whereas other analyses might require token-level references; we annotate at the token level to make our annotation as general as possible. Our dataset has already proven enabling for case studies on these particular forums BIBREF6 , including a study of marketplace activity on bulk hacked accounts versus users selling their own accounts.
Our task has similarities to both slot-filling information extraction (with provenance information) as well as standard named-entity recognition (NER). Compared to NER, our task features a higher dependence on context: we only care about the specific product being bought or sold in a post, not other products that might be mentioned. Moreover, because we are operating over forums, the data is substantially messier than classical NER corpora like CoNLL BIBREF7 . While prior work has dealt with these messy characteristics for syntax BIBREF8 and for discourse BIBREF9 , BIBREF10 , BIBREF11 , our work is the first to tackle forum data (and marketplace forums specifically) from an information extraction perspective.
Having annotated a dataset, we examine supervised and semi-supervised learning approaches to the product extraction problem. Binary or CRF classification of tokens as products is effective, but performance drops off precipitously when a system trained on one forum is applied to a different forum: in this sense, even two different cybercrime forums seem to represent different “fine-grained domains.” Since we want to avoid having to annotate data for every new forum that might need to be analyzed, we explore several methods for adaptation, mixing type-level annotation BIBREF12 , BIBREF13 , token-level annotation BIBREF14 , and semi-supervised approaches BIBREF15 , BIBREF16 . We find little improvement from these methods and discuss why they fail to have a larger impact.
Overall, our results characterize the challenges of our fine-grained domain adaptation problem in online marketplace data. We believe that this new dataset provides a useful testbed for additional inquiry and investigation into modeling of fine-grained domain differences.
Dataset and Annotation
We consider several forums that vary in the nature of products being traded:
Table TABREF3 gives some statistics of these forums. These are the same forums used to study product activity in PortnoffEtAl2017. We collected all available posts and annotated a subset of them. In total, we annotated 130,336 tokens; accounting for multiple annotators, our annotators considered 478,176 tokens in the process of labeling the data.
Figure FIGREF2 shows two examples of posts from Darkode. In addition to aspects of the annotation, which we describe below, we see that the text exhibits common features of web text: abbreviations, ungrammaticality, spelling errors, and visual formatting, particularly in thread titles. Also, note how some words that are not products here might be in other contexts (e.g., Exploits).
Annotation Process
We developed our annotation guidelines through six preliminary rounds of annotation, covering 560 posts. Each round was followed by discussion and resolution of every post with disagreements. We benefited from members of our team who brought extensive domain expertise to the task. As well as refining the annotation guidelines, the development process trained annotators who were not security experts. The data annotated during this process is not included in Table TABREF3 .
Once we had defined the annotation standard, we annotated datasets from Darkode, Hack Forums, Blackhat, and Nulled as described in Table TABREF3 . Three people annotated every post in the Darkode training, Hack Forums training, Blackhat test, and Nulled test sets; these annotations were then merged into a final annotation by majority vote. The development and test sets for Darkode and Hack Forums were annotated by additional team members (five for Darkode, one for Hack Forums), and then every disagreement was discussed and resolved to produce a final annotation. The authors, who are researchers in either NLP or computer security, did all of the annotation.
We preprocessed the data using the tokenizer and sentence-splitter from the Stanford CoreNLP toolkit BIBREF17 . Note that many sentences in the data are already delimited by line breaks, making the sentence-splitting task much easier. We performed annotation on the tokenized data so that annotations would be consistent with surrounding punctuation and hyphenated words.
Our full annotation guide is available with our data release. Our basic annotation principle is to annotate tokens when they are either the product that will be delivered or are an integral part of the method leading to the delivery of that product. Figure FIGREF2 shows examples of this for a deliverable product (bot) as well as a service (cleaning). Both a product and service may be annotated in a single example: for a post asking to hack an account, hack is the method and the deliverable is the account, so both are annotated. In general, methods expressed as verbs may be annotated in addition to nominal references.
When the product is a multiword expression (e.g., Backconnect bot), it is almost exclusively a noun phrase, in which case we annotate the head word of the noun phrase (bot). Annotating single tokens instead of spans meant that we avoided having to agree on an exact parse of each post, since even the boundaries of base noun phrases can be quite difficult to agree on in ungrammatical text.
If multiple different products are being bought or sold, we annotate them all. We do not annotate:
Features of products
Generic product references, e.g., this, them
Product mentions inside “vouches” (reviews from other users)
Product mentions outside of the first and last 10 lines of each post
Table TABREF3 shows inter-annotator agreement according to our annotation scheme. We use the Fleiss' Kappa measurement BIBREF18 , treating our task as a token-level annotation where every token is annotated as either a product or not. We chose this measure as we are interested in agreement between more than two annotators (ruling out Cohen's kappa), have a binary assignment (ruling out correlation coefficients) and have datasets large enough that the biases Krippendorff's Alpha addresses are not a concern. The values indicate reasonable agreement.
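For reference, the token-level agreement computation can be sketched as below; the use of statsmodels and the aggregation from raw labels into per-token count rows are our own choices, not part of the original setup.

```python
# Sketch: Fleiss' kappa for the binary token-level task (product / not product).
# labels: array of shape (n_tokens, n_annotators) with 0/1 entries.
import numpy as np
from statsmodels.stats.inter_rater import fleiss_kappa

def token_fleiss_kappa(labels):
    labels = np.asarray(labels)
    counts = np.stack([(labels == 0).sum(axis=1),   # votes for "not product"
                       (labels == 1).sum(axis=1)],  # votes for "product"
                      axis=1)
    return fleiss_kappa(counts)
```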
Discussion
Because we annotate entities in a context-sensitive way (i.e., only annotating those in product context), our task resembles a post-level information extraction task. The product information in a post can be thought of as a list-valued slot to be filled in the style of TAC KBP BIBREF19 , BIBREF20 , with the token-level annotations constituting provenance information. However, we chose to anchor the task fully at the token level to simplify the annotation task: at the post level, we would have to decide whether two distinct product mentions were actually distinct products or not, which requires heavier domain knowledge. Our approach also resembles the fully token-level annotations of entity and event information in the ACE dataset BIBREF21 .
Evaluation Metrics
In light of the various views on this task and its different requirements for different potential applications, we describe and motivate a few distinct evaluation metrics below. The choice of metric will impact system design, as we discuss in the following sections.
Phrase-level Evaluation
Another axis of variation in metrics comes from whether we consider token-level or phrase-level outputs. As noted in the previous section, we did not annotate noun phrases, but we may actually be interested in identifying them. In Figure FIGREF2 , for example, extracting Backconnect bot is more useful than extracting bot in isolation, since bot is a less specific characterization of the product.
We can convert our token-level annotations to phrase-level annotations by projecting our annotations to the noun phrase level based on the output of an automatic parser. We used the parser of ChenManning2014 to parse all sentences of each post. For each annotated token that was given a nominal tag (N*), we projected that token to the largest NP containing it of length less than or equal to 7; most product NPs are shorter than this, and when the parser predicts a longer NP, our analysis found that it typically reflects a mistake. In Figure FIGREF2 , the entire noun phrase Backconnect bot would be labeled as a product. For products realized as verbs (e.g., hack), we leave the annotation as the single token.
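A simplified version of this projection, approximated with spaCy noun chunks rather than the constituency parses used here, could look as follows; it is an illustrative approximation, not the exact procedure.

```python
# Approximate sketch of the token-to-NP projection: map an annotated nominal
# token to the largest containing noun phrase of length <= 7; verbs (and
# tokens with no covering NP) remain single-token annotations.
import spacy

nlp = spacy.load("en_core_web_sm")

def project_to_np(sentence, token_idx, max_len=7):
    doc = nlp(sentence)
    token = doc[token_idx]
    if token.pos_ not in ("NOUN", "PROPN"):
        return token.text
    covering = [chunk for chunk in doc.noun_chunks
                if chunk.start <= token.i < chunk.end and len(chunk) <= max_len]
    return max(covering, key=len).text if covering else token.text
```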
Throughout the rest of this work, we will evaluate sometimes at the token-level and sometimes at the NP-level (including for the product type evaluation and post-level accuracy); we will specify which evaluation is used where.
Models
We consider several baselines for product extraction, two supervised learning-based methods (here), and semi-supervised methods (Section SECREF5 ).
Basic Results
Table TABREF30 shows development set results on Darkode for each of the four systems for each metric described in Section SECREF3 . Our learning-based systems substantially outperform the baselines on the metrics they are optimized for. The post-level system underperforms the binary classifier on the token evaluation, but is superior at not only post-level accuracy but also product type F$_1$. This lends credence to our hypothesis that picking one product suffices to characterize a large fraction of posts. Comparing the automatic systems with human annotator performance we see a substantial gap. Note that our best annotator's token F$_1$ was 89.8, and NP post accuracy was 100%; a careful, well-trained annotator can achieve very high performance, indicating a high skyline.
The noun phrase metric appears to be generally more forgiving, since token distinctions within noun phrases are erased. The post-level NP system achieves an F-score of 78 on product type identification, and post-level accuracy is around 88%. While there is room for improvement, this system is accurate enough to enable analysis of Darkode with automatic annotation.
Throughout the rest of this work, we focus on NP-level evaluation and post-level NP accuracy.
Domain Adaptation
Table TABREF30 only showed results for training and evaluating within the same forum (Darkode). However, we wish to apply our system to extract product occurrences from a wide variety of forums, so we are interested in how well the system will generalize to a new forum. Tables TABREF33 and TABREF38 show full results of several systems in within-forum and cross-forum evaluation settings. Performance is severely degraded in the cross-forum setting compared to the within-forum setting, e.g., on NP-level F$_1$, a Hack Forums-trained model is 14.6 F$_1$ worse at the Darkode task than a Darkode-trained model (61.2 vs. 75.8). Differences in how the systems adapt between different forums will be explored more thoroughly in Section SECREF43 .
In the next few sections, we explore several possible methods for improving results in the cross-forum settings and attempting to build a more domain-general system. These techniques generally reflect two possible hypotheses about the source of the cross-domain challenges:
Brown Clusters
To test Hypothesis 1, we investigate whether additional lexical information helps identify product-like words in new domains. A classic semi-supervised technique for exploiting unlabeled target data is to fire features over word clusters or word vectors BIBREF15 . These features should generalize well across domains that the clusters are formed on: if product nouns occur in similar contexts across domains and therefore wind up in the same cluster, then a model trained on domain-limited data should be able to learn that that cluster identity is indicative of products.
We form Brown clusters on our unlabeled data from both Darkode and Hack Forums (see Table TABREF3 for sizes). We use Liang2005's implementation to learn 50 clusters. Upon inspection, these clusters do indeed capture some of the semantics relevant to the problem: for example, the cluster 110 has as its most frequent members service, account, price, time, crypter, and server, many of which are product-associated nouns. We incorporate these as features into our model by characterizing each token with prefixes of the Brown cluster ID; we used prefixes of length 2, 4, and 6.
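The cluster features can be sketched as follows, assuming a mapping from each word to its Brown-cluster bit string.

```python
# Sketch: Brown-cluster prefix features for a token, given word2cluster mapping
# words to the bit-string path produced by the clustering.
def brown_features(word, word2cluster, prefix_lengths=(2, 4, 6)):
    bits = word2cluster.get(word.lower())
    if bits is None:
        return ["BROWN=UNK"]
    return ["BROWN{}={}".format(k, bits[:k]) for k in prefix_lengths]
```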
Tables TABREF33 and TABREF38 show the results of incorporating Brown cluster features into our trained models. These features do not lead to statistically-significant gains in either NP-level F$_1$ or post-level accuracy, despite small improvements in some cases. This indicates that Brown clusters might be a useful feature sometimes, but do not solve the domain adaptation problem in this context.
Type-level Annotation
Another approach following Hypothesis 1 is to use small amounts of supervised data. One cheap approach for annotating data in a new domain is to exploit type-level annotation BIBREF12 , BIBREF13 . Our token-level annotation standard is relatively complex to learn, but a researcher could quite easily provide a few exemplar products for a new forum based on just a few minutes of reading posts and analyzing the forum.
Given the data that we've already annotated, we can simulate this process by iterating through our labeled data and collecting annotated product names that are sufficiently common. Specifically, we take all (lowercased, stemmed) product tokens and keep those occurring at least 4 times in the training dataset (recall that these datasets are roughly 700 posts). This gives us a list of 121 products in Darkode and 105 products in Hack Forums.
To incorporate this information into our system, we add a new feature on each token indicating whether or not it occurs in the gazetteer. At training time, we use the gazetteer scraped from the training set. At test time, we use the gazetteer from the target domain as a form of partial type-level supervision. Tables TABREF33 and TABREF38 show the results of incorporating the gazetteer into the system. Gazetteers seem to provide somewhat consistent gains in cross-domain settings, though many of these individual improvements are not statistically significant, and the gazetteers can sometimes hurt performance when testing on the same domain the system was trained on.
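A sketch of the gazetteer construction and the resulting feature is given below; the choice of the Porter stemmer is an assumption for illustration.

```python
# Sketch: type-level gazetteer of (lowercased, stemmed) product tokens seen at
# least 4 times in training, plus a membership feature for each token.
from collections import Counter
from nltk.stem import PorterStemmer

stem = PorterStemmer().stem

def build_gazetteer(annotated_product_tokens, min_count=4):
    counts = Counter(stem(t.lower()) for t in annotated_product_tokens)
    return {t for t, c in counts.items() if c >= min_count}

def gazetteer_feature(token, gazetteer):
    return ["IN_GAZETTEER"] if stem(token.lower()) in gazetteer else []
```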
Token-level Annotation
We now turn our attention to methods that might address Hypothesis 2. If we assume the domain transfer problem is more complex, we really want to leverage labeled data in the target domain rather than attempting to transfer features based only on type-level information. Specifically, we are interested in cases where a relatively small number of labeled posts (less than 100) might provide substantial benefit to the adaptation; a researcher could plausibly do this annotation in a few hours.
We consider two ways of exploiting labeled target-domain data. The first is to simply take these posts as additional training data. The second is to also employ the “frustratingly easy” domain adaptation method of Daume2007. In this framework, each feature fired in our model is actually fired twice: one copy is domain-general and one is conjoined with the domain label (here, the name of the forum). In doing so, the model should gain some ability to separate domain-general from domain-specific feature values, with regularization encouraging the domain-general feature to explain as much of the phenomenon as possible. For both training methods, we upweight the contribution of the target-domain posts in the objective by a factor of 5.
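The feature augmentation and the upweighting can be sketched as below.

```python
# Sketch of "frustratingly easy" domain adaptation: each feature is fired
# twice, once domain-general and once conjoined with the forum name; posts
# from the target domain are upweighted by a factor of 5 in the objective.
def augment_features(features, forum):
    return features + ["{}|{}".format(forum, f) for f in features]

def example_weight(forum, target_forum, upweight=5.0):
    return upweight if forum == target_forum else 1.0
```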
Figure FIGREF41 shows learning curves for both of these methods in two adaptation settings as we vary the amount of labeled target-domain data. The system trained on Hack Forums is able to make good use of labeled data from Darkode: having access to 20 labeled posts leads to gains of roughly 7 F$_1$. Interestingly, the system trained on Darkode is not able to make good use of labeled data from Hack Forums, and the domain-specific features actually cause a drop in performance until we include a substantial amount of data from Hack Forums (at least 80 posts). We are likely overfitting the small Hack Forums training set with the domain-specific features.
Analysis
In order to understand the variable performance and shortcomings of the domain adaptation approaches we explored, it is useful to examine our two initial hypotheses and characterize the datasets a bit further. To do so, we break down system performance on products seen in the training set versus novel products. Because our systems depend on lexical and character $n$-gram features, we expect that they will do better at predicting products we have seen before.
Table TABREF39 confirms this intuition: it shows product out-of-vocabulary rates in each of the four forums relative to training on both Darkode and Hack Forums, along with recall of an NP-level system on both previously seen and OOV products. As expected, performance is substantially higher on in-vocabulary products. OOV rates of a Darkode-trained system are generally lower on new forums, indicating that that forum has better all-around product coverage. A system trained on Darkode is therefore in some sense more domain-general than one trained on Hack Forums.
This would seem to support Hypothesis 1. Moreover, Table TABREF33 shows that the Hack Forums-trained system achieves a 21% error reduction on Hack Forums compared to a Darkode-trained system, while a Darkode-trained system obtains a 38% error reduction on Darkode relative to a Hack Forums-trained system; this greater error reduction means that Darkode has better coverage of Hack Forums than vice versa. Darkode's better product coverage also helps explain why Section SECREF40 showed better performance of adapting Hack Forums to Darkode than the other way around: augmenting Hack Forums data with a few posts from Darkode can give critical knowledge about new products, but this is less true if the forums are reversed. Duplicating features and adding parameters to the learner also has less of a clear benefit when adapting from Darkode, when the types of knowledge that need to be added are less concrete.
Note, however, that these results do not tell the full story. Table TABREF39 reports recall values, but not all systems have the same precision/recall tradeoff: although they were tuned to balance precision and recall on their respective development sets, the Hack Forums-trained system is slightly more precision-oriented on Nulled than the Darkode-trained system. In fact, Table TABREF33 shows that the Hack Forums-trained system actually performs better on Nulled, largely due to better performance on previously-seen products. This indicates that there is some truth to Hypothesis 2: product coverage is not the only important factor determining performance.
Conclusion
We present a new dataset of posts from cybercrime marketplaces annotated with product references, a task which blends IE and NER. Learning-based methods degrade in performance when applied to new forums, and while we explore methods for fine-grained domain adaption in this data, effective methods for this task are still an open question.
Our datasets used in this work are available at https://evidencebasedsecurity.org/forums/ Code for the product extractor can be found at https://github.com/ccied/ugforum-analysis/tree/master/extract-product
Acknowledgments
This work was supported in part by the National Science Foundation under grants CNS-1237265 and CNS-1619620, by the Office of Naval Research under MURI grant N000140911081, by the Center for Long-Term Cybersecurity and by gifts from Google. We thank all the people that provided us with forum data for our analysis; in particular Scraping Hub and SRI for their assistance in collecting data for this study. Any opinions, findings, and conclusions expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors. | Unanswerable |
fc06502fa62803b62f6fd84265bfcfb207c1113b | fc06502fa62803b62f6fd84265bfcfb207c1113b_0 | Q: Who annotated the data?
Text: Introduction
NLP can be extremely useful for enabling scientific inquiry, helping us to quickly and efficiently understand large corpora, gather evidence, and test hypotheses BIBREF0 , BIBREF1 . One domain for which automated analysis is particularly useful is Internet security: researchers obtain large amounts of text data pertinent to active threats or ongoing cybercriminal activity, for which the ability to rapidly characterize that text and draw conclusions can reap major benefits BIBREF2 , BIBREF3 . However, conducting automatic analysis is difficult because this data is out-of-domain for conventional NLP models, which harms the performance of both discrete models BIBREF4 and deep models BIBREF5 . Not only that, we show that data from one cybercrime forum is even out of domain with respect to another cybercrime forum, making this data especially challenging.
In this work, we present the task of identifying products being bought and sold in the marketplace sections of these online cybercrime forums. We define a token-level annotation task where, for each post, we annotate references to the product or products being bought or sold in that post. Having the ability to automatically tag posts in this way lets us characterize the composition of a forum in terms of what products it deals with, identify trends over time, associate users with particular activity profiles, and connect to price information to better understand the marketplace. Some of these analyses only require post-level information (what is the product being bought or sold in this post?) whereas other analyses might require token-level references; we annotate at the token level to make our annotation as general as possible. Our dataset has already proven enabling for case studies on these particular forums BIBREF6 , including a study of marketplace activity on bulk hacked accounts versus users selling their own accounts.
Our task has similarities to both slot-filling information extraction (with provenance information) as well as standard named-entity recognition (NER). Compared to NER, our task features a higher dependence on context: we only care about the specific product being bought or sold in a post, not other products that might be mentioned. Moreover, because we are operating over forums, the data is substantially messier than classical NER corpora like CoNLL BIBREF7 . While prior work has dealt with these messy characteristics for syntax BIBREF8 and for discourse BIBREF9 , BIBREF10 , BIBREF11 , our work is the first to tackle forum data (and marketplace forums specifically) from an information extraction perspective.
Having annotated a dataset, we examine supervised and semi-supervised learning approaches to the product extraction problem. Binary or CRF classification of tokens as products is effective, but performance drops off precipitously when a system trained on one forum is applied to a different forum: in this sense, even two different cybercrime forums seem to represent different “fine-grained domains.” Since we want to avoid having to annotate data for every new forum that might need to be analyzed, we explore several methods for adaptation, mixing type-level annotation BIBREF12 , BIBREF13 , token-level annotation BIBREF14 , and semi-supervised approaches BIBREF15 , BIBREF16 . We find little improvement from these methods and discuss why they fail to have a larger impact.
Overall, our results characterize the challenges of our fine-grained domain adaptation problem in online marketplace data. We believe that this new dataset provides a useful testbed for additional inquiry and investigation into modeling of fine-grained domain differences.
Dataset and Annotation
We consider several forums that vary in the nature of products being traded:
Table TABREF3 gives some statistics of these forums. These are the same forums used to study product activity in PortnoffEtAl2017. We collected all available posts and annotated a subset of them. In total, we annotated 130,336 tokens; accounting for multiple annotators, our annotators considered 478,176 tokens in the process of labeling the data.
Figure FIGREF2 shows two examples of posts from Darkode. In addition to aspects of the annotation, which we describe below, we see that the text exhibits common features of web text: abbreviations, ungrammaticality, spelling errors, and visual formatting, particularly in thread titles. Also, note how some words that are not products here might be in other contexts (e.g., Exploits).
Annotation Process
We developed our annotation guidelines through six preliminary rounds of annotation, covering 560 posts. Each round was followed by discussion and resolution of every post with disagreements. We benefited from members of our team who brought extensive domain expertise to the task. As well as refining the annotation guidelines, the development process trained annotators who were not security experts. The data annotated during this process is not included in Table TABREF3 .
Once we had defined the annotation standard, we annotated datasets from Darkode, Hack Forums, Blackhat, and Nulled as described in Table TABREF3 . Three people annotated every post in the Darkode training, Hack Forums training, Blackhat test, and Nulled test sets; these annotations were then merged into a final annotation by majority vote. The development and test sets for Darkode and Hack Forums were annotated by additional team members (five for Darkode, one for Hack Forums), and then every disagreement was discussed and resolved to produce a final annotation. The authors, who are researchers in either NLP or computer security, did all of the annotation.
We preprocessed the data using the tokenizer and sentence-splitter from the Stanford CoreNLP toolkit BIBREF17 . Note that many sentences in the data are already delimited by line breaks, making the sentence-splitting task much easier. We performed annotation on the tokenized data so that annotations would be consistent with surrounding punctuation and hyphenated words.
Our full annotation guide is available with our data release. Our basic annotation principle is to annotate tokens when they are either the product that will be delivered or are an integral part of the method leading to the delivery of that product. Figure FIGREF2 shows examples of this for a deliverable product (bot) as well as a service (cleaning). Both a product and service may be annotated in a single example: for a post asking to hack an account, hack is the method and the deliverable is the account, so both are annotated. In general, methods expressed as verbs may be annotated in addition to nominal references.
When the product is a multiword expression (e.g., Backconnect bot), it is almost exclusively a noun phrase, in which case we annotate the head word of the noun phrase (bot). Annotating single tokens instead of spans meant that we avoided having to agree on an exact parse of each post, since even the boundaries of base noun phrases can be quite difficult to agree on in ungrammatical text.
If multiple different products are being bought or sold, we annotate them all. We do not annotate:
Features of products
Generic product references, e.g., this, them
Product mentions inside “vouches” (reviews from other users)
Product mentions outside of the first and last 10 lines of each post
Table TABREF3 shows inter-annotator agreement according to our annotation scheme. We use the Fleiss' Kappa measurement BIBREF18 , treating our task as a token-level annotation where every token is annotated as either a product or not. We chose this measure as we are interested in agreement between more than two annotators (ruling out Cohen's kappa), have a binary assignment (ruling out correlation coefficients) and have datasets large enough that the biases Krippendorff's Alpha addresses are not a concern. The values indicate reasonable agreement.
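To make the agreement computation concrete, the following is a minimal sketch of Fleiss' kappa for this binary token-level setting; the input layout (one row of 0/1 labels per token, one column per annotator) and the function name are illustrative rather than taken from our released code.
from collections import Counter

def fleiss_kappa(label_matrix):
    # label_matrix: one row per token, one 0/1 label per annotator.
    n_items = len(label_matrix)
    n_raters = len(label_matrix[0])
    category_counts = Counter()
    p_i_sum = 0.0
    for row in label_matrix:
        counts = Counter(row)
        category_counts.update(counts)
        agreeing_pairs = sum(c * (c - 1) for c in counts.values())
        p_i_sum += agreeing_pairs / (n_raters * (n_raters - 1))
    p_bar = p_i_sum / n_items                      # mean observed agreement
    p_e = sum((c / (n_items * n_raters)) ** 2 for c in category_counts.values())
    return (p_bar - p_e) / (1 - p_e)               # chance-corrected agreement

# Toy check: 3 annotators, 4 tokens (1 = product token).
print(fleiss_kappa([[1, 1, 1], [0, 0, 0], [1, 1, 0], [0, 0, 0]]))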
Discussion
Because we annotate entities in a context-sensitive way (i.e., only annotating those in product context), our task resembles a post-level information extraction task. The product information in a post can be thought of as a list-valued slot to be filled in the style of TAC KBP BIBREF19 , BIBREF20 , with the token-level annotations constituting provenance information. However, we chose to anchor the task fully at the token level to simplify the annotation task: at the post level, we would have to decide whether two distinct product mentions were actually distinct products or not, which requires heavier domain knowledge. Our approach also resembles the fully token-level annotations of entity and event information in the ACE dataset BIBREF21 .
Evaluation Metrics
In light of the various views on this task and its different requirements for different potential applications, we describe and motivate a few distinct evaluation metrics below. The choice of metric will impact system design, as we discuss in the following sections.
Phrase-level Evaluation
Another axis of variation in metrics comes from whether we consider token-level or phrase-level outputs. As noted in the previous section, we did not annotate noun phrases, but we may actually be interested in identifying them. In Figure FIGREF2 , for example, extracting Backconnect bot is more useful than extracting bot in isolation, since bot is a less specific characterization of the product.
We can convert our token-level annotations to phrase-level annotations by projecting our annotations to the noun phrase level based on the output of an automatic parser. We used the parser of ChenManning2014 to parse all sentences of each post. For each annotated token that was given a nominal tag (N*), we projected that token to the largest NP containing it of length less than or equal to 7; most product NPs are shorter than this, and when the parser predicts a longer NP, our analysis found that it typically reflects a mistake. In Figure FIGREF2 , the entire noun phrase Backconnect bot would be labeled as a product. For products realized as verbs (e.g., hack), we leave the annotation as the single token.
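A rough sketch of this projection step is shown below, using nltk's Tree type as a stand-in for the parser output; the length-7 cap and the fallback to the single token follow the description above, while the function name and the toy parse are purely illustrative.
from nltk import Tree

def np_span_for_token(tree, token_index, max_len=7):
    # Return the leaf span (start, end) of the largest NP containing the token,
    # capped at max_len leaves; fall back to the token itself (e.g., for verbs).
    spans = []
    def walk(node, start):
        if not isinstance(node, Tree):             # a leaf word
            return start + 1
        end = start
        for child in node:
            end = walk(child, end)
        if node.label() == 'NP' and start <= token_index < end and end - start <= max_len:
            spans.append((start, end))
        return end
    walk(tree, 0)
    return max(spans, key=lambda s: s[1] - s[0]) if spans else (token_index, token_index + 1)

parse = Tree.fromstring("(S (NP (NN Backconnect) (NN bot)) (VP (VBN wanted)))")
print(np_span_for_token(parse, 1))                 # -> (0, 2), i.e. "Backconnect bot"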
Throughout the rest of this work, we will evaluate sometimes at the token-level and sometimes at the NP-level (including for the product type evaluation and post-level accuracy); we will specify which evaluation is used where.
Models
We consider several baselines for product extraction, two supervised learning-based methods (here), and semi-supervised methods (Section SECREF5 ).
Basic Results
Table TABREF30 shows development set results on Darkode for each of the four systems for each metric described in Section SECREF3 . Our learning-based systems substantially outperform the baselines on the metrics they are optimized for. The post-level system underperforms the binary classifier on the token evaluation, but is superior at not only post-level accuracy but also product type F1. This lends credence to our hypothesis that picking one product suffices to characterize a large fraction of posts. Comparing the automatic systems with human annotator performance, we see a substantial gap. Note that our best annotator's token F1 was 89.8, and NP post accuracy was 100%; a careful, well-trained annotator can achieve very high performance, indicating a high skyline.
The noun phrase metric appears to be generally more forgiving, since token distinctions within noun phrases are erased. The post-level NP system achieves an F-score of 78 on product type identification, and post-level accuracy is around 88%. While there is room for improvement, this system is accurate enough to enable analysis of Darkode with automatic annotation.
Throughout the rest of this work, we focus on NP-level evaluation and post-level NP accuracy.
Domain Adaptation
Table TABREF30 only showed results for training and evaluating within the same forum (Darkode). However, we wish to apply our system to extract product occurrences from a wide variety of forums, so we are interested in how well the system will generalize to a new forum. Tables TABREF33 and TABREF38 show full results of several systems in within-forum and cross-forum evaluation settings. Performance is severely degraded in the cross-forum setting compared to the within-forum setting, e.g., on NP-level F1, a Hack Forums-trained model is 14.6 F1 worse at the Darkode task than a Darkode-trained model (61.2 vs. 75.8). Differences in how the systems adapt between different forums will be explored more thoroughly in Section SECREF43 .
In the next few sections, we explore several possible methods for improving results in the cross-forum settings and attempting to build a more domain-general system. These techniques generally reflect two possible hypotheses about the source of the cross-domain challenges:
Brown Clusters
To test Hypothesis 1, we investigate whether additional lexical information helps identify product-like words in new domains. A classic semi-supervised technique for exploiting unlabeled target data is to fire features over word clusters or word vectors BIBREF15 . These features should generalize well across domains that the clusters are formed on: if product nouns occur in similar contexts across domains and therefore wind up in the same cluster, then a model trained on domain-limited data should be able to learn that that cluster identity is indicative of products.
We form Brown clusters on our unlabeled data from both Darkode and Hack Forums (see Table TABREF3 for sizes). We use Liang2005's implementation to learn 50 clusters. Upon inspection, these clusters do indeed capture some of the semantics relevant to the problem: for example, the cluster 110 has as its most frequent members service, account, price, time, crypter, and server, many of which are product-associated nouns. We incorporate these as features into our model by characterizing each token with prefixes of the Brown cluster ID; we used prefixes of length 2, 4, and 6.
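The feature templates can be implemented along the following lines; this sketch assumes the usual paths-file output of Liang's tool (cluster bitstring, word, and count separated by tabs), and the file path and feature names are placeholders.
def load_brown_clusters(path):
    # One line per word: '<bitstring>\t<word>\t<count>'.
    clusters = {}
    with open(path, encoding='utf-8') as f:
        for line in f:
            parts = line.rstrip('\n').split('\t')
            if len(parts) >= 2:
                clusters[parts[1]] = parts[0]
    return clusters

def brown_features(token, clusters, prefix_lengths=(2, 4, 6)):
    # Fire one feature per prefix length of the token's cluster id.
    bits = clusters.get(token.lower())
    if bits is None:
        return ['BROWN=UNK']
    return ['BROWN%d=%s' % (k, bits[:k]) for k in prefix_lengths]

# clusters = load_brown_clusters('brown-50.paths')   # hypothetical path
# brown_features('crypter', clusters) -> e.g. ['BROWN2=11', 'BROWN4=1101', 'BROWN6=110100']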
Tables TABREF33 and TABREF38 show the results of incorporating Brown cluster features into our trained models. These features do not lead to statistically-significant gains in either NP-level F1 or post-level accuracy, despite small improvements in some cases. This indicates that Brown clusters might be a useful feature sometimes, but do not solve the domain adaptation problem in this context.
Type-level Annotation
Another approach following Hypothesis 1 is to use small amounts of supervised data. One cheap approach to annotating data in a new domain is to exploit type-level annotation BIBREF12 , BIBREF13 . Our token-level annotation standard is relatively complex to learn, but a researcher could quite easily provide a few exemplar products for a new forum based on just a few minutes of reading posts and analyzing the forum.
Given the data that we've already annotated, we can simulate this process by iterating through our labeled data and collecting annotated product names that are sufficiently common. Specifically, we take all (lowercased, stemmed) product tokens and keep those occurring at least 4 times in the training dataset (recall that these datasets are roughly 700 posts). This gives us a list of 121 products in Darkode and 105 products in Hack Forums.
To incorporate this information into our system, we add a new feature on each token indicating whether or not it occurs in the gazetteer. At training time, we use the gazetteer scraped from the training set. At test time, we use the gazetteer from the target domain as a form of partial type-level supervision. Tables TABREF33 and TABREF38 shows the results of incorporating the gazetteer into the system. Gazetteers seem to provide somewhat consistent gains in cross-domain settings, though many of these individual improvements are not statistically significant, and the gazetteers can sometimes hurt performance when testing on the same domain the system was trained on.
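A sketch of the gazetteer construction and the resulting feature is given below; the Porter stemmer, the data layout (a post as a list of token/label pairs), and the variable names in the usage comments are simplifications for illustration, not necessarily what the full system uses.
from collections import Counter
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def build_gazetteer(annotated_posts, min_count=4):
    # Collect (lowercased, stemmed) product tokens seen at least min_count times.
    counts = Counter()
    for post in annotated_posts:                   # post: list of (token, is_product) pairs
        for token, is_product in post:
            if is_product:
                counts[stemmer.stem(token.lower())] += 1
    return {w for w, c in counts.items() if c >= min_count}

def gazetteer_feature(token, gazetteer):
    return 'IN_GAZ' if stemmer.stem(token.lower()) in gazetteer else 'NOT_IN_GAZ'

# train_gaz = build_gazetteer(source_train_posts)  # source-domain gazetteer at training time
# test_gaz = build_gazetteer(target_train_posts)   # target-domain gazetteer at test time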
Token-level Annotation
We now turn our attention to methods that might address Hypothesis 2. If we assume the domain transfer problem is more complex, we really want to leverage labeled data in the target domain rather than attempting to transfer features based only on type-level information. Specifically, we are interested in cases where a relatively small number of labeled posts (less than 100) might provide substantial benefit to the adaptation; a researcher could plausibly do this annotation in a few hours.
We consider two ways of exploiting labeled target-domain data. The first is to simply take these posts as additional training data. The second is to also employ the “frustratingly easy” domain adaptation method of Daume2007. In this framework, each feature fired in our model is actually fired twice: one copy is domain-general and one is conjoined with the domain label (here, the name of the forum). In doing so, the model should gain some ability to separate domain-general from domain-specific feature values, with regularization encouraging the domain-general feature to explain as much of the phenomenon as possible. For both training methods, we upweight the contribution of the target-domain posts in the objective by a factor of 5.
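The augmentation itself is only a feature-space transformation, sketched here; the separator character and the weighting helper are illustrative, and in the real system the factor-of-5 weighting is applied inside the training objective.
def augment_features(base_features, forum):
    # Daume (2007)-style duplication: each feature fires once as domain-general
    # and once conjoined with the name of the source forum.
    return base_features + ['%s@%s' % (f, forum) for f in base_features]

def example_weight(forum, target_forum, target_weight=5.0):
    # Upweight target-domain posts in the objective (factor of 5 above).
    return target_weight if forum == target_forum else 1.0

print(augment_features(['WORD=bot', 'BROWN4=1101'], forum='darkode'))
# -> ['WORD=bot', 'BROWN4=1101', 'WORD=bot@darkode', 'BROWN4=1101@darkode']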
Figure FIGREF41 shows learning curves for both of these methods in two adaptation settings as we vary the amount of labeled target-domain data. The system trained on Hack Forums is able to make good use of labeled data from Darkode: having access to 20 labeled posts leads to gains of roughly 7 F1. Interestingly, the system trained on Darkode is not able to make good use of labeled data from Hack Forums, and the domain-specific features actually cause a drop in performance until we include a substantial amount of data from Hack Forums (at least 80 posts). We are likely overfitting the small Hack Forums training set with the domain-specific features.
Analysis
In order to understand the variable performance and shortcomings of the domain adaptation approaches we explored, it is useful to examine our two initial hypotheses and characterize the datasets a bit further. To do so, we break down system performance on products seen in the training set versus novel products. Because our systems depend on lexical and character n-gram features, we expect that they will do better at predicting products we have seen before.
Table TABREF39 confirms this intuition: it shows product out-of-vocabulary rates in each of the four forums relative to training on both Darkode and Hack Forums, along with recall of an NP-level system on both previously seen and OOV products. As expected, performance is substantially higher on in-vocabulary products. OOV rates of a Darkode-trained system are generally lower on new forums, indicating that Darkode has better all-around product coverage. A system trained on Darkode is therefore in some sense more domain-general than one trained on Hack Forums.
This would seem to support Hypothesis 1. Moreover, Table TABREF33 shows that the Hack Forums-trained system achieves a 21% error reduction on Hack Forums compared to a Darkode-trained system, while a Darkode-trained system obtains a 38% error reduction on Darkode relative to a Hack Forums-trained system; this greater error reduction means that Darkode has better coverage of Hack Forums than vice versa. Darkode's better product coverage also helps explain why Section SECREF40 showed better performance of adapting Hack Forums to Darkode than the other way around: augmenting Hack Forums data with a few posts from Darkode can give critical knowledge about new products, but this is less true if the forums are reversed. Duplicating features and adding parameters to the learner also has less of a clear benefit when adapting from Darkode, when the types of knowledge that need to be added are less concrete.
Note, however, that these results do not tell the full story. Table TABREF39 reports recall values, but not all systems have the same precision/recall tradeoff: although they were tuned to balance precision and recall on their respective development sets, the Hack Forums-trained system is slightly more precision-oriented on Nulled than the Darkode-trained system. In fact, Table TABREF33 shows that the Hack Forums-trained system actually performs better on Nulled, largely due to better performance on previously-seen products. This indicates that there is some truth to Hypothesis 2: product coverage is not the only important factor determining performance.
Conclusion
We present a new dataset of posts from cybercrime marketplaces annotated with product references, a task which blends IE and NER. Learning-based methods degrade in performance when applied to new forums, and while we explore methods for fine-grained domain adaptation in this data, effective methods for this task are still an open question.
The datasets used in this work are available at https://evidencebasedsecurity.org/forums/ and code for the product extractor can be found at https://github.com/ccied/ugforum-analysis/tree/master/extract-product
Acknowledgments
This work was supported in part by the National Science Foundation under grants CNS-1237265 and CNS-1619620, by the Office of Naval Research under MURI grant N000140911081, by the Center for Long-Term Cybersecurity and by gifts from Google. We thank all the people that provided us with forum data for our analysis; in particular Scraping Hub and SRI for their assistance in collecting data for this study. Any opinions, findings, and conclusions expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors. | annotators who were not security experts, researchers in either NLP or computer security |
ce807a42370bfca10fa322d6fa772e4a58a8dca1 | ce807a42370bfca10fa322d6fa772e4a58a8dca1_0 | Q: What are the four forums the data comes from?
Text: Introduction
NLP can be extremely useful for enabling scientific inquiry, helping us to quickly and efficiently understand large corpora, gather evidence, and test hypotheses BIBREF0 , BIBREF1 . One domain for which automated analysis is particularly useful is Internet security: researchers obtain large amounts of text data pertinent to active threats or ongoing cybercriminal activity, for which the ability to rapidly characterize that text and draw conclusions can reap major benefits BIBREF2 , BIBREF3 . However, conducting automatic analysis is difficult because this data is out-of-domain for conventional NLP models, which harms the performance of both discrete models BIBREF4 and deep models BIBREF5 . Not only that, we show that data from one cybercrime forum is even out of domain with respect to another cybercrime forum, making this data especially challenging.
In this work, we present the task of identifying products being bought and sold in the marketplace sections of these online cybercrime forums. We define a token-level annotation task where, for each post, we annotate references to the product or products being bought or sold in that post. Having the ability to automatically tag posts in this way lets us characterize the composition of a forum in terms of what products it deals with, identify trends over time, associate users with particular activity profiles, and connect to price information to better understand the marketplace. Some of these analyses only require post-level information (what is the product being bought or sold in this post?) whereas other analyses might require token-level references; we annotate at the token level to make our annotation as general as possible. Our dataset has already proven enabling for case studies on these particular forums BIBREF6 , including a study of marketplace activity on bulk hacked accounts versus users selling their own accounts.
Our task has similarities to both slot-filling information extraction (with provenance information) as well as standard named-entity recognition (NER). Compared to NER, our task features a higher dependence on context: we only care about the specific product being bought or sold in a post, not other products that might be mentioned. Moreover, because we are operating over forums, the data is substantially messier than classical NER corpora like CoNLL BIBREF7 . While prior work has dealt with these messy characteristics for syntax BIBREF8 and for discourse BIBREF9 , BIBREF10 , BIBREF11 , our work is the first to tackle forum data (and marketplace forums specifically) from an information extraction perspective.
Having annotated a dataset, we examine supervised and semi-supervised learning approaches to the product extraction problem. Binary or CRF classification of tokens as products is effective, but performance drops off precipitously when a system trained on one forum is applied to a different forum: in this sense, even two different cybercrime forums seem to represent different “fine-grained domains.” Since we want to avoid having to annotate data for every new forum that might need to be analyzed, we explore several methods for adaptation, mixing type-level annotation BIBREF12 , BIBREF13 , token-level annotation BIBREF14 , and semi-supervised approaches BIBREF15 , BIBREF16 . We find little improvement from these methods and discuss why they fail to have a larger impact.
Overall, our results characterize the challenges of our fine-grained domain adaptation problem in online marketplace data. We believe that this new dataset provides a useful testbed for additional inquiry and investigation into modeling of fine-grained domain differences.
Dataset and Annotation
We consider several forums that vary in the nature of products being traded:
Table TABREF3 gives some statistics of these forums. These are the same forums used to study product activity in PortnoffEtAl2017. We collected all available posts and annotated a subset of them. In total, we annotated 130,336 tokens; accounting for multiple annotators, our annotators considered 478,176 tokens in the process of labeling the data.
Figure FIGREF2 shows two examples of posts from Darkode. In addition to aspects of the annotation, which we describe below, we see that the text exhibits common features of web text: abbreviations, ungrammaticality, spelling errors, and visual formatting, particularly in thread titles. Also, note how some words that are not products here might be in other contexts (e.g., Exploits).
Annotation Process
We developed our annotation guidelines through six preliminary rounds of annotation, covering 560 posts. Each round was followed by discussion and resolution of every post with disagreements. We benefited from members of our team who brought extensive domain expertise to the task. As well as refining the annotation guidelines, the development process trained annotators who were not security experts. The data annotated during this process is not included in Table TABREF3 .
Once we had defined the annotation standard, we annotated datasets from Darkode, Hack Forums, Blackhat, and Nulled as described in Table TABREF3 . Three people annotated every post in the Darkode training, Hack Forums training, Blackhat test, and Nulled test sets; these annotations were then merged into a final annotation by majority vote. The development and test sets for Darkode and Hack Forums were annotated by additional team members (five for Darkode, one for Hack Forums), and then every disagreement was discussed and resolved to produce a final annotation. The authors, who are researchers in either NLP or computer security, did all of the annotation.
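For the triply annotated sets, the merge step amounts to a token-wise majority vote, as in the small sketch below; the binary-label representation is a simplification of the actual annotation files.
def merge_by_majority(annotations):
    # annotations: one label sequence per annotator (1 = product token, 0 = not).
    merged = []
    for labels in zip(*annotations):
        merged.append(1 if sum(labels) * 2 > len(labels) else 0)
    return merged

print(merge_by_majority([[1, 0, 1, 0], [1, 0, 0, 0], [1, 1, 1, 0]]))   # -> [1, 0, 1, 0]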
We preprocessed the data using the tokenizer and sentence-splitter from the Stanford CoreNLP toolkit BIBREF17 . Note that many sentences in the data are already delimited by line breaks, making the sentence-splitting task much easier. We performed annotation on the tokenized data so that annotations would be consistent with surrounding punctuation and hyphenated words.
Our full annotation guide is available with our data release. Our basic annotation principle is to annotate tokens when they are either the product that will be delivered or are an integral part of the method leading to the delivery of that product. Figure FIGREF2 shows examples of this for a deliverable product (bot) as well as a service (cleaning). Both a product and service may be annotated in a single example: for a post asking to hack an account, hack is the method and the deliverable is the account, so both are annotated. In general, methods expressed as verbs may be annotated in addition to nominal references.
When the product is a multiword expression (e.g., Backconnect bot), it is almost exclusively a noun phrase, in which case we annotate the head word of the noun phrase (bot). Annotating single tokens instead of spans meant that we avoided having to agree on an exact parse of each post, since even the boundaries of base noun phrases can be quite difficult to agree on in ungrammatical text.
If multiple different products are being bought or sold, we annotate them all. We do not annotate:
Features of products
Generic product references, e.g., this, them
Product mentions inside “vouches” (reviews from other users)
Product mentions outside of the first and last 10 lines of each post
Table TABREF3 shows inter-annotator agreement according to our annotation scheme. We use the Fleiss' Kappa measurement BIBREF18 , treating our task as a token-level annotation where every token is annotated as either a product or not. We chose this measure as we are interested in agreement between more than two annotators (ruling out Cohen's kappa), have a binary assignment (ruling out correlation coefficients) and have datasets large enough that the biases Krippendorff's Alpha addresses are not a concern. The values indicate reasonable agreement.
Discussion
Because we annotate entities in a context-sensitive way (i.e., only annotating those in product context), our task resembles a post-level information extraction task. The product information in a post can be thought of as a list-valued slot to be filled in the style of TAC KBP BIBREF19 , BIBREF20 , with the token-level annotations constituting provenance information. However, we chose to anchor the task fully at the token level to simplify the annotation task: at the post level, we would have to decide whether two distinct product mentions were actually distinct products or not, which requires heavier domain knowledge. Our approach also resembles the fully token-level annotations of entity and event information in the ACE dataset BIBREF21 .
Evaluation Metrics
In light of the various views on this task and its different requirements for different potential applications, we describe and motivate a few distinct evaluation metrics below. The choice of metric will impact system design, as we discuss in the following sections.
Phrase-level Evaluation
Another axis of variation in metrics comes from whether we consider token-level or phrase-level outputs. As noted in the previous section, we did not annotate noun phrases, but we may actually be interested in identifying them. In Figure FIGREF2 , for example, extracting Backconnect bot is more useful than extracting bot in isolation, since bot is a less specific characterization of the product.
We can convert our token-level annotations to phrase-level annotations by projecting our annotations to the noun phrase level based on the output of an automatic parser. We used the parser of ChenManning2014 to parse all sentences of each post. For each annotated token that was given a nominal tag (N*), we projected that token to the largest NP containing it of length less than or equal to 7; most product NPs are shorter than this, and when the parser predicts a longer NP, our analysis found that it typically reflects a mistake. In Figure FIGREF2 , the entire noun phrase Backconnect bot would be labeled as a product. For products realized as verbs (e.g., hack), we leave the annotation as the single token.
Throughout the rest of this work, we will evaluate sometimes at the token-level and sometimes at the NP-level (including for the product type evaluation and post-level accuracy); we will specify which evaluation is used where.
Models
We consider several baselines for product extraction, two supervised learning-based methods (here), and semi-supervised methods (Section SECREF5 ).
Basic Results
Table TABREF30 shows development set results on Darkode for each of the four systems for each metric described in Section SECREF3 . Our learning-based systems substantially outperform the baselines on the metrics they are optimized for. The post-level system underperforms the binary classifier on the token evaluation, but is superior at not only post-level accuracy but also product type F1. This lends credence to our hypothesis that picking one product suffices to characterize a large fraction of posts. Comparing the automatic systems with human annotator performance, we see a substantial gap. Note that our best annotator's token F1 was 89.8, and NP post accuracy was 100%; a careful, well-trained annotator can achieve very high performance, indicating a high skyline.
The noun phrase metric appears to be generally more forgiving, since token distinctions within noun phrases are erased. The post-level NP system achieves an F-score of 78 on product type identification, and post-level accuracy is around 88%. While there is room for improvement, this system is accurate enough to enable analysis of Darkode with automatic annotation.
Throughout the rest of this work, we focus on NP-level evaluation and post-level NP accuracy.
Domain Adaptation
Table TABREF30 only showed results for training and evaluating within the same forum (Darkode). However, we wish to apply our system to extract product occurrences from a wide variety of forums, so we are interested in how well the system will generalize to a new forum. Tables TABREF33 and TABREF38 show full results of several systems in within-forum and cross-forum evaluation settings. Performance is severely degraded in the cross-forum setting compared to the within-forum setting, e.g., on NP-level F1, a Hack Forums-trained model is 14.6 F1 worse at the Darkode task than a Darkode-trained model (61.2 vs. 75.8). Differences in how the systems adapt between different forums will be explored more thoroughly in Section SECREF43 .
In the next few sections, we explore several possible methods for improving results in the cross-forum settings and attempting to build a more domain-general system. These techniques generally reflect two possible hypotheses about the source of the cross-domain challenges:
Brown Clusters
To test Hypothesis 1, we investigate whether additional lexical information helps identify product-like words in new domains. A classic semi-supervised technique for exploiting unlabeled target data is to fire features over word clusters or word vectors BIBREF15 . These features should generalize well across domains that the clusters are formed on: if product nouns occur in similar contexts across domains and therefore wind up in the same cluster, then a model trained on domain-limited data should be able to learn that that cluster identity is indicative of products.
We form Brown clusters on our unlabeled data from both Darkode and Hack Forums (see Table TABREF3 for sizes). We use Liang2005's implementation to learn 50 clusters. Upon inspection, these clusters do indeed capture some of the semantics relevant to the problem: for example, the cluster 110 has as its most frequent members service, account, price, time, crypter, and server, many of which are product-associated nouns. We incorporate these as features into our model by characterizing each token with prefixes of the Brown cluster ID; we used prefixes of length 2, 4, and 6.
Tables TABREF33 and TABREF38 show the results of incorporating Brown cluster features into our trained models. These features do not lead to statistically-significant gains in either NP-level F1 or post-level accuracy, despite small improvements in some cases. This indicates that Brown clusters might be a useful feature sometimes, but do not solve the domain adaptation problem in this context.
Type-level Annotation
Another approach following Hypothesis 1 is to use small amounts of supervised data. One cheap approach to annotating data in a new domain is to exploit type-level annotation BIBREF12 , BIBREF13 . Our token-level annotation standard is relatively complex to learn, but a researcher could quite easily provide a few exemplar products for a new forum based on just a few minutes of reading posts and analyzing the forum.
Given the data that we've already annotated, we can simulate this process by iterating through our labeled data and collecting annotated product names that are sufficiently common. Specifically, we take all (lowercased, stemmed) product tokens and keep those occurring at least 4 times in the training dataset (recall that these datasets are roughly 700 posts). This gives us a list of 121 products in Darkode and 105 products in Hack Forums.
To incorporate this information into our system, we add a new feature on each token indicating whether or not it occurs in the gazetteer. At training time, we use the gazetteer scraped from the training set. At test time, we use the gazetteer from the target domain as a form of partial type-level supervision. Tables TABREF33 and TABREF38 shows the results of incorporating the gazetteer into the system. Gazetteers seem to provide somewhat consistent gains in cross-domain settings, though many of these individual improvements are not statistically significant, and the gazetteers can sometimes hurt performance when testing on the same domain the system was trained on.
Token-level Annotation
We now turn our attention to methods that might address Hypothesis 2. If we assume the domain transfer problem is more complex, we really want to leverage labeled data in the target domain rather than attempting to transfer features based only on type-level information. Specifically, we are interested in cases where a relatively small number of labeled posts (less than 100) might provide substantial benefit to the adaptation; a researcher could plausibly do this annotation in a few hours.
We consider two ways of exploiting labeled target-domain data. The first is to simply take these posts as additional training data. The second is to also employ the “frustratingly easy” domain adaptation method of Daume2007. In this framework, each feature fired in our model is actually fired twice: one copy is domain-general and one is conjoined with the domain label (here, the name of the forum). In doing so, the model should gain some ability to separate domain-general from domain-specific feature values, with regularization encouraging the domain-general feature to explain as much of the phenomenon as possible. For both training methods, we upweight the contribution of the target-domain posts in the objective by a factor of 5.
Figure FIGREF41 shows learning curves for both of these methods in two adaptation settings as we vary the amount of labeled target-domain data. The system trained on Hack Forums is able to make good use of labeled data from Darkode: having access to 20 labeled posts leads to gains of roughly 7 F1. Interestingly, the system trained on Darkode is not able to make good use of labeled data from Hack Forums, and the domain-specific features actually cause a drop in performance until we include a substantial amount of data from Hack Forums (at least 80 posts). We are likely overfitting the small Hack Forums training set with the domain-specific features.
Analysis
In order to understand the variable performance and shortcomings of the domain adaptation approaches we explored, it is useful to examine our two initial hypotheses and characterize the datasets a bit further. To do so, we break down system performance on products seen in the training set versus novel products. Because our systems depend on lexical and character n-gram features, we expect that they will do better at predicting products we have seen before.
Table TABREF39 confirms this intuition: it shows product out-of-vocabulary rates in each of the four forums relative to training on both Darkode and Hack Forums, along with recall of an NP-level system on both previously seen and OOV products. As expected, performance is substantially higher on in-vocabulary products. OOV rates of a Darkode-trained system are generally lower on new forums, indicating that Darkode has better all-around product coverage. A system trained on Darkode is therefore in some sense more domain-general than one trained on Hack Forums.
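The seen/OOV breakdown can be computed with a few lines along these lines; the representation of gold mentions as (mention id, surface form) pairs and of predictions as a set of mention ids is an assumption made for the sketch.
def recall_by_vocab(gold_mentions, predicted_ids, train_vocab):
    # Split gold product mentions into previously seen vs OOV and report recall on each.
    buckets = {'seen': [0, 0], 'oov': [0, 0]}      # [recalled, total]
    for mention_id, form in gold_mentions:
        key = 'seen' if form.lower() in train_vocab else 'oov'
        buckets[key][1] += 1
        if mention_id in predicted_ids:
            buckets[key][0] += 1
    return {k: (hit / total if total else 0.0) for k, (hit, total) in buckets.items()}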
This would seem to support Hypothesis 1. Moreover, Table TABREF33 shows that the Hack Forums-trained system achieves a 21% error reduction on Hack Forums compared to a Darkode-trained system, while a Darkode-trained system obtains a 38% error reduction on Darkode relative to a Hack Forums-trained system; this greater error reduction means that Darkode has better coverage of Hack Forums than vice versa. Darkode's better product coverage also helps explain why Section SECREF40 showed better performance of adapting Hack Forums to Darkode than the other way around: augmenting Hack Forums data with a few posts from Darkode can give critical knowledge about new products, but this is less true if the forums are reversed. Duplicating features and adding parameters to the learner also has less of a clear benefit when adapting from Darkode, when the types of knowledge that need to be added are less concrete.
Note, however, that these results do not tell the full story. Table TABREF39 reports recall values, but not all systems have the same precision/recall tradeoff: although they were tuned to balance precision and recall on their respective development sets, the Hack Forums-trained system is slightly more precision-oriented on Nulled than the Darkode-trained system. In fact, Table TABREF33 shows that the Hack Forums-trained system actually performs better on Nulled, largely due to better performance on previously-seen products. This indicates that there is some truth to Hypothesis 2: product coverage is not the only important factor determining performance.
Conclusion
We present a new dataset of posts from cybercrime marketplaces annotated with product references, a task which blends IE and NER. Learning-based methods degrade in performance when applied to new forums, and while we explore methods for fine-grained domain adaptation in this data, effective methods for this task are still an open question.
The datasets used in this work are available at https://evidencebasedsecurity.org/forums/ and code for the product extractor can be found at https://github.com/ccied/ugforum-analysis/tree/master/extract-product
Acknowledgments
This work was supported in part by the National Science Foundation under grants CNS-1237265 and CNS-1619620, by the Office of Naval Research under MURI grant N000140911081, by the Center for Long-Term Cybersecurity and by gifts from Google. We thank all the people that provided us with forum data for our analysis; in particular Scraping Hub and SRI for their assistance in collecting data for this study. Any opinions, findings, and conclusions expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors. | Darkode, Hack Forums, Blackhat and Nulled. |
f91835f17c0086baec65ebd99d12326ae1ae87d2 | f91835f17c0086baec65ebd99d12326ae1ae87d2_0 | Q: How do they obtain parsed source sentences?
Text: Introduction
Neural machine translation (NMT) typically makes use of a recurrent neural network (RNN) -based encoder and decoder, along with an attention mechanism BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . However, it has been shown that RNNs require some supervision to learn syntax BIBREF4 , BIBREF5 , BIBREF6 . Therefore, explicitly incorporating syntactic information into NMT has the potential to improve performance. This is particularly true for source syntax, which can improve the model's representation of the source language.
Recently, there have been a number of proposals for using linearized representations of parses within standard NMT BIBREF7 , BIBREF8 , BIBREF9 . Linearized parses are advantageous because they can inject syntactic information into the models without significant changes to the architecture. However, using linearized parses in a sequence-to-sequence (seq2seq) framework creates some challenges, particularly when using source parses. First, the parsed sequences are significantly longer than standard sentences, since they contain node labels as well as words. Second, these systems often fail when the source sentence is not parsed. This can be a problem for inference, since the external parser may fail on an input sentence at test time. We propose a method for incorporating linearized source parses into NMT that addresses these challenges by taking both the sequential source sentence and its linearized parse simultaneously as input in a multi-source framework. Thus, the model is able to use the syntactic information encoded in the parse while falling back to the sequential sentence when necessary. Our proposed model improves over both standard and parsed NMT baselines.
Seq2seq Neural Parsing
Using linearized parse trees within sequential frameworks was first done in the context of neural parsing. vinyals2015grammar parsed using an attentional seq2seq model; they used linearized, unlexicalized parse trees on the target side and sentences on the source side. In addition, as in this work, they used an external parser to create synthetic parsed training data, resulting in improved parsing performance. choe2016parsing adopted a similar strategy, using linearized parses in an RNN language modeling framework.
NMT with Source Syntax
Among the first proposals for using source syntax in NMT was that of luong2015multi, who introduced a multi-task system in which the source data was parsed and translated using a shared encoder and two decoders. More radical changes to the standard NMT paradigm have also been proposed. eriguchi2016tree introduced tree-to-sequence NMT; this model took parse trees as input using a tree-LSTM BIBREF10 encoder. bastings2017graph used a graph convolutional encoder in order to take labeled dependency parses of the source sentences into account. hashimoto2017neural added a latent graph parser to the encoder, allowing it to learn soft dependency parses while simultaneously learning to translate.
Linearized Parse Trees in NMT
The idea of incorporating linearized parses into seq2seq has been adapted to NMT as a means of injecting syntax. aharoni2017towards first did this by parsing the target side of the training data and training the system to generate parsed translations of the source input; this is the inverse of our parse2seq baseline. Similarly, nadejde2017syntax interleaved CCG supertags with words on the target side, finding that this improved translation despite requiring longer sequences.
Most similar to our multi-source model is the parallel RNN model proposed by li2017modeling. Like multi-source, the parallel RNN used two encoders, one for words and the other for syntax. However, they combined these representations at the word level, whereas we combine them on the sentence level. Their mixed RNN model is also similar to our parse2seq baseline, although the mixed RNN decoder attended only to words. As the mixed RNN model outperformed the parallel RNN model, we do not attempt to compare our model to parallel RNN. These models are similar to ours in that they incorporate linearized parses into NMT; here, we utilize a multi-source framework.
Multi-Source NMT
Multi-source methods in neural machine translation were first introduced by zoph2016multi for multilingual translation. They used one encoder per source language, and combined the resulting sentence representations before feeding them into the decoder. firat2016multi expanded on this by creating a multilingual NMT system with multiple encoders and decoders. libovicky2017attention applied multi-source NMT to multimodal translation and automatic post-editing and explored different strategies for combining attention over the two sources. In this paper, we apply the multi-source framework to a novel task, syntactic neural machine translation.
NMT with Linearized Source Parses
We propose a multi-source method for incorporating source syntax into NMT. This method makes use of linearized source parses; we describe these parses in section SECREF5 . Throughout this paper, we refer to standard sentences that do not contain any explicit syntactic information as sequential; see Table TABREF6 for an example.
Linearized Source Parses
We use an off-the-shelf parser, in this case Stanford CoreNLP BIBREF11 , to create binary constituency parses. These parses are linearized as shown in Table TABREF6 . We tokenize the opening parentheses with the node label (so each node label begins with a parenthesis) but keep the closing parentheses separate from the words they follow. For our task, the parser failed on one training sentence of 5.9 million, which we discarded, and succeeded on all test sentences. It took roughly 16 hours to parse the 5.9 million training sentences.
Following sennrich2015neural, our networks operate at the subword level using byte pair encoding (BPE) with a shared vocabulary on the source and target sides. However, the parser operates at the word level. Therefore, we parse then break into subwords, so a leaf may have multiple tokens without internal structure.
The proposed method is tested using both lexicalized and unlexicalized parses. In unlexicalized parses, we remove the words, keeping only the node labels and the parentheses. In lexicalized parses, the words are included. Table TABREF6 shows an example of the three source sentence formats: sequential, lexicalized parse, and unlexicalized parse. Note that the lexicalized parse is significantly longer than the other versions.
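The linearization convention described above can be reproduced with a short recursive routine; the sketch below uses nltk's Tree purely as a container, and the example sentence is not taken from our data.
from nltk import Tree

def linearize(tree, lexicalized=True):
    # '(LABEL' tokens open nodes, ')' closes them; drop words for the unlexicalized variant.
    if not isinstance(tree, Tree):                 # a leaf, i.e. a word
        return [tree] if lexicalized else []
    tokens = ['(' + tree.label()]
    for child in tree:
        tokens.extend(linearize(child, lexicalized))
    tokens.append(')')
    return tokens

parse = Tree.fromstring("(ROOT (S (NP (PRP I)) (VP (VBP like) (NP (NNS dogs)))))")
print(' '.join(linearize(parse)))                  # lexicalized parse input
print(' '.join(linearize(parse, lexicalized=False)))   # unlexicalized parse input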
Multi-Source
We propose a multi-source framework for injecting linearized source parses into NMT. This model consists of two identical RNN encoders with no shared parameters, as well as a standard RNN decoder. For each target sentence, two versions of the source sentence are used: the sequential (standard) version and the linearized parse (lexicalized or unlexicalized). Each of these is encoded simultaneously using the encoders; the encodings are then combined and used as input to the decoder. We combine the source encodings using the hierarchical attention combination proposed by libovicky2017attention. This consists of a separate attention mechanism for each encoder; these are then combined using an additional attention mechanism over the two separate context vectors. This multi-source method is thus able to combine the advantages of both standard RNN-based encodings and syntactic encodings.
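A simplified numpy sketch of the hierarchical combination is given below: one attention per encoder, then a second attention over the two resulting context vectors. The bilinear scoring function and the random projection matrices are stand-ins for the learned attention parameters of the actual model.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(decoder_state, encoder_states, W):
    # One attention head: score every encoder state, return the weighted context vector.
    weights = softmax(encoder_states @ (W @ decoder_state))
    return weights @ encoder_states

def hierarchical_context(decoder_state, seq_states, parse_states, W_seq, W_parse, W_top):
    # Attend over each encoder separately, then attend over the two contexts.
    contexts = np.stack([attend(decoder_state, seq_states, W_seq),
                         attend(decoder_state, parse_states, W_parse)])
    top_weights = softmax(contexts @ (W_top @ decoder_state))
    return top_weights @ contexts

d = 512
rng = np.random.default_rng(0)
ctx = hierarchical_context(rng.standard_normal(d),
                           rng.standard_normal((50, d)),    # sequential encoder states
                           rng.standard_normal((150, d)),   # parsed encoder states
                           rng.standard_normal((d, d)),
                           rng.standard_normal((d, d)),
                           rng.standard_normal((d, d)))
print(ctx.shape)                                   # (512,)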
Data
We base our experiments on the WMT17 BIBREF12 English (EN)→German (DE) news translation task. All 5.9 million parallel training sentences are used, but no monolingual data. Validation is done on newstest2015, while newstest2016 and newstest2017 are used for testing.
We train a shared BPE vocabulary with 60k merge operations on the parallel training data. For the parsed data, we break words into subwords after applying the Stanford parser. We tokenize and truecase the data using the Moses tokenizer and truecaser BIBREF13 .
Implementation
The models are implemented in Neural Monkey BIBREF14 . They are trained using Adam BIBREF15 and have minibatch size 40, RNN size 512, and dropout probability 0.2 BIBREF16 . We train to convergence on the validation set, using BLEU BIBREF17 as the metric.
For sequential inputs and outputs, the maximum sentence length is 50 subwords. For parsed inputs, we increase maximum sentence length to 150 subwords to account for the increased length due to the parsing labels; we still use a maximum output length of 50 subwords for these systems.
Baselines
The proposed models are compared against two baselines. The first, referred to here as seq2seq, is the standard RNN-based neural machine translation system with attention BIBREF0 . This baseline does not use the parsed data.
The second baseline we consider is a slight modification of the mixed RNN model proposed by li2017modeling. This uses an identical architecture to the seq2seq baseline (except for a longer maximum sentence length in the encoder). Instead of using sequential data on the source side, the linearized parses are used. We allow the system to attend equally to words and node labels on the source side, rather than restricting the attention to words. We refer to this baseline as parse2seq.
Results
Table TABREF11 shows the performance on EN→DE translation for each of the proposed systems and the baselines, as approximated by BLEU score.
The multi-source systems improve strongly over both baselines, with improvements of up to 1.5 BLEU over the seq2seq baseline and up to 1.1 BLEU over the parse2seq baseline. In addition, the lexicalized multi-source systems yield slightly higher BLEU scores than the unlexicalized multi-source systems; this is surprising because the lexicalized systems have significantly longer sequences than the unlexicalized ones. Finally, it is interesting to compare the seq2seq and parse2seq baselines. Parse2seq outperforms seq2seq by only a small amount compared to multi-source; thus, while adding syntax to NMT can be helpful, some ways of doing so are more effective than others.
Inference Without Parsed Sentences
The parse2seq and multi-source systems require parsed source data at inference time. However, the parser may fail on an input sentence. Therefore, we examine how well these systems do when given only unparsed source sentences at test time.
Table TABREF13 displays the results of these experiments. For the parse2seq baseline, we use only sequential (seq) data as input. For the lexicalized and unlexicalized multi-source systems, two options are considered: seq + seq uses identical sequential data as input to both encoders, while seq + null uses null input for the parsed encoder, where every source sentence is “( )”.
The parse2seq system fails when given only sequential source data. On the other hand, both multi-source systems perform reasonably well without parsed data, although the BLEU scores are worse than multi-source with parsed data.
BLEU by Sentence Length
For models that use source-side linearized parses (multi-source and parse2seq), the source sequences are significantly longer than for the seq2seq baseline. Since NMT already performs relatively poorly on long sentences BIBREF0 , adding linearized source parses may exacerbate this issue. To detect whether this occurs, we calculate BLEU by sentence length.
We bucket the sentences in newstest2017 by source sentence length. We then compute BLEU scores for each bucket for the seq2seq and parse2seq baselines and the lexicalized multi-source system. The results are in Figure FIGREF15 .
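The bucketing analysis only requires grouping the test sentences and scoring each group; the sketch below uses sacrebleu for the per-bucket scores, and the bucket edges and function names are illustrative.
import sacrebleu

def length_bucket(n, edges=(10, 20, 30, 40, 50)):
    for e in edges:
        if n <= e:
            return '<=%d' % e
    return '>%d' % edges[-1]

def bleu_by_length(sources, hypotheses, references):
    buckets = {}
    for src, hyp, ref in zip(sources, hypotheses, references):
        label = length_bucket(len(src.split()))
        hyps, refs = buckets.setdefault(label, ([], []))
        hyps.append(hyp)
        refs.append(ref)
    return {label: sacrebleu.corpus_bleu(hyps, [refs]).score
            for label, (hyps, refs) in buckets.items()}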
In line with previous work on NMT on long sentences BIBREF0 , BIBREF8 , we see a significant deterioration in BLEU for longer sentences for all systems. In particular, although the parse2seq model outperformed the seq2seq model overall, it does worse than seq2seq for sentences containing more than 30 words. This indicates that parse2seq performance does indeed suffer due to its long sequences. On the other hand, the multi-source system outperforms the seq2seq baseline for all sentence lengths and does particularly well for sentences with over 50 words. This may be because the multi-source system has both sequential and parsed input, so it can rely more on sequential input for very long sentences.
Conclusion
In this paper, we presented a multi-source method for effectively incorporating linearized parses of the source data into neural machine translation. This method, in which the parsed and sequential versions of the sentence were both taken as input during training and inference, resulted in gains of up to 1.5 BLEU on EN→DE translation. In addition, unlike parse2seq, the multi-source model translated reasonably well even when the source sentence was not parsed.
In the future, we will explore adding back-translated BIBREF18 or copied BIBREF19 target data to our multi-source system. The multi-source model does not require all training data to be parsed; thus, monolingual data can be used even if the parser is unreliable for the synthetic or copied source sentences.
Acknowledgments
This work was funded by the Amazon Academic Research Awards program. | Stanford CoreNLP BIBREF11 |
14e78db206a8180ea637774aa572b073e3ffa219 | 14e78db206a8180ea637774aa572b073e3ffa219_0 | Q: What kind of encoders are used for the parsed source sentence?
Text: Introduction
Neural machine translation (NMT) typically makes use of a recurrent neural network (RNN) -based encoder and decoder, along with an attention mechanism BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . However, it has been shown that RNNs require some supervision to learn syntax BIBREF4 , BIBREF5 , BIBREF6 . Therefore, explicitly incorporating syntactic information into NMT has the potential to improve performance. This is particularly true for source syntax, which can improve the model's representation of the source language.
Recently, there have been a number of proposals for using linearized representations of parses within standard NMT BIBREF7 , BIBREF8 , BIBREF9 . Linearized parses are advantageous because they can inject syntactic information into the models without significant changes to the architecture. However, using linearized parses in a sequence-to-sequence (seq2seq) framework creates some challenges, particularly when using source parses. First, the parsed sequences are significantly longer than standard sentences, since they contain node labels as well as words. Second, these systems often fail when the source sentence is not parsed. This can be a problem for inference, since the external parser may fail on an input sentence at test time. We propose a method for incorporating linearized source parses into NMT that addresses these challenges by taking both the sequential source sentence and its linearized parse simultaneously as input in a multi-source framework. Thus, the model is able to use the syntactic information encoded in the parse while falling back to the sequential sentence when necessary. Our proposed model improves over both standard and parsed NMT baselines.
Seq2seq Neural Parsing
Using linearized parse trees within sequential frameworks was first done in the context of neural parsing. vinyals2015grammar parsed using an attentional seq2seq model; they used linearized, unlexicalized parse trees on the target side and sentences on the source side. In addition, as in this work, they used an external parser to create synthetic parsed training data, resulting in improved parsing performance. choe2016parsing adopted a similar strategy, using linearized parses in an RNN language modeling framework.
NMT with Source Syntax
Among the first proposals for using source syntax in NMT was that of luong2015multi, who introduced a multi-task system in which the source data was parsed and translated using a shared encoder and two decoders. More radical changes to the standard NMT paradigm have also been proposed. eriguchi2016tree introduced tree-to-sequence NMT; this model took parse trees as input using a tree-LSTM BIBREF10 encoder. bastings2017graph used a graph convolutional encoder in order to take labeled dependency parses of the source sentences into account. hashimoto2017neural added a latent graph parser to the encoder, allowing it to learn soft dependency parses while simultaneously learning to translate.
Linearized Parse Trees in NMT
The idea of incorporating linearized parses into seq2seq has been adapted to NMT as a means of injecting syntax. aharoni2017towards first did this by parsing the target side of the training data and training the system to generate parsed translations of the source input; this is the inverse of our parse2seq baseline. Similarly, nadejde2017syntax interleaved CCG supertags with words on the target side, finding that this improved translation despite requiring longer sequences.
Most similar to our multi-source model is the parallel RNN model proposed by li2017modeling. Like multi-source, the parallel RNN used two encoders, one for words and the other for syntax. However, they combined these representations at the word level, whereas we combine them on the sentence level. Their mixed RNN model is also similar to our parse2seq baseline, although the mixed RNN decoder attended only to words. As the mixed RNN model outperformed the parallel RNN model, we do not attempt to compare our model to parallel RNN. These models are similar to ours in that they incorporate linearized parses into NMT; here, we utilize a multi-source framework.
Multi-Source NMT
Multi-source methods in neural machine translation were first introduced by zoph2016multi for multilingual translation. They used one encoder per source language, and combined the resulting sentence representations before feeding them into the decoder. firat2016multi expanded on this by creating a multilingual NMT system with multiple encoders and decoders. libovicky2017attention applied multi-source NMT to multimodal translation and automatic post-editing and explored different strategies for combining attention over the two sources. In this paper, we apply the multi-source framework to a novel task, syntactic neural machine translation.
NMT with Linearized Source Parses
We propose a multi-source method for incorporating source syntax into NMT. This method makes use of linearized source parses; we describe these parses in section SECREF5 . Throughout this paper, we refer to standard sentences that do not contain any explicit syntactic information as sequential; see Table TABREF6 for an example.
Linearized Source Parses
We use an off-the-shelf parser, in this case Stanford CoreNLP BIBREF11 , to create binary constituency parses. These parses are linearized as shown in Table TABREF6 . We tokenize the opening parentheses with the node label (so each node label begins with a parenthesis) but keep the closing parentheses separate from the words they follow. For our task, the parser failed on one training sentence of 5.9 million, which we discarded, and succeeded on all test sentences. It took roughly 16 hours to parse the 5.9 million training sentences.
Following sennrich2015neural, our networks operate at the subword level using byte pair encoding (BPE) with a shared vocabulary on the source and target sides. However, the parser operates at the word level. Therefore, we parse then break into subwords, so a leaf may have multiple tokens without internal structure.
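Because segmentation happens after parsing, only word leaves are split; node labels and parentheses pass through untouched, as in the sketch below. The bpe_segment argument stands in for whatever subword encoder is used, and the toy segmenter exists only to show the shape of the output.
def bpe_linearized_parse(tokens, bpe_segment):
    # tokens: linearized parse tokens; bpe_segment: word -> list of subword tokens.
    out = []
    for tok in tokens:
        if tok.startswith('(') or tok == ')':      # node label or closing parenthesis
            out.append(tok)
        else:
            out.extend(bpe_segment(tok))
    return out

toy_bpe = lambda w: [w] if len(w) < 8 else [w[:len(w) // 2] + '@@', w[len(w) // 2:]]
print(bpe_linearized_parse(['(NP', '(JJ', 'parliamentary', ')', '(NNS', 'elections', ')', ')'], toy_bpe))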
The proposed method is tested using both lexicalized and unlexicalized parses. In unlexicalized parses, we remove the words, keeping only the node labels and the parentheses. In lexicalized parses, the words are included. Table TABREF6 shows an example of the three source sentence formats: sequential, lexicalized parse, and unlexicalized parse. Note that the lexicalized parse is significantly longer than the other versions.
Multi-Source
We propose a multi-source framework for injecting linearized source parses into NMT. This model consists of two identical RNN encoders with no shared parameters, as well as a standard RNN decoder. For each target sentence, two versions of the source sentence are used: the sequential (standard) version and the linearized parse (lexicalized or unlexicalized). Each of these is encoded simultaneously using the encoders; the encodings are then combined and used as input to the decoder. We combine the source encodings using the hierarchical attention combination proposed by libovicky2017attention. This consists of a separate attention mechanism for each encoder; these are then combined using an additional attention mechanism over the two separate context vectors. This multi-source method is thus able to combine the advantages of both standard RNN-based encodings and syntactic encodings.
Data
We base our experiments on the WMT17 BIBREF12 English (EN) INLINEFORM0 German (DE) news translation task. All 5.9 million parallel training sentences are used, but no monolingual data. Validation is done on newstest2015, while newstest2016 and newstest2017 are used for testing.
We train a shared BPE vocabulary with 60k merge operations on the parallel training data. For the parsed data, we break words into subwords after applying the Stanford parser. We tokenize and truecase the data using the Moses tokenizer and truecaser BIBREF13 .
Implementation
The models are implemented in Neural Monkey BIBREF14 . They are trained using Adam BIBREF15 and have minibatch size 40, RNN size 512, and dropout probability 0.2 BIBREF16 . We train to convergence on the validation set, using BLEU BIBREF17 as the metric.
For sequential inputs and outputs, the maximum sentence length is 50 subwords. For parsed inputs, we increase maximum sentence length to 150 subwords to account for the increased length due to the parsing labels; we still use a maximum output length of 50 subwords for these systems.
Baselines
The proposed models are compared against two baselines. The first, referred to here as seq2seq, is the standard RNN-based neural machine translation system with attention BIBREF0 . This baseline does not use the parsed data.
The second baseline we consider is a slight modification of the mixed RNN model proposed by li2017modeling. This uses an identical architecture to the seq2seq baseline (except for a longer maximum sentence length in the encoder). Instead of using sequential data on the source side, the linearized parses are used. We allow the system to attend equally to words and node labels on the source side, rather than restricting the attention to words. We refer to this baseline as parse2seq.
Results
Table TABREF11 shows the performance on EN INLINEFORM0 DE translation for each of the proposed systems and the baselines, as approximated by BLEU score.
The multi-source systems improve strongly over both baselines, with improvements of up to 1.5 BLEU over the seq2seq baseline and up to 1.1 BLEU over the parse2seq baseline. In addition, the lexicalized multi-source systems yield slightly higher BLEU scores than the unlexicalized multi-source systems; this is surprising because the lexicalized systems have significantly longer sequences than the unlexicalized ones. Finally, it is interesting to compare the seq2seq and parse2seq baselines. Parse2seq outperforms seq2seq by only a small amount compared to multi-source; thus, while adding syntax to NMT can be helpful, some ways of doing so are more effective than others.
Inference Without Parsed Sentences
The parse2seq and multi-source systems require parsed source data at inference time. However, the parser may fail on an input sentence. Therefore, we examine how well these systems do when given only unparsed source sentences at test time.
Table TABREF13 displays the results of these experiments. For the parse2seq baseline, we use only sequential (seq) data as input. For the lexicalized and unlexicalized multi-source systems, two options are considered: seq + seq uses identical sequential data as input to both encoders, while seq + null uses null input for the parsed encoder, where every source sentence is “( )”.
The parse2seq system fails when given only sequential source data. On the other hand, both multi-source systems perform reasonably well without parsed data, although the BLEU scores are worse than multi-source with parsed data.
BLEU by Sentence Length
For models that use source-side linearized parses (multi-source and parse2seq), the source sequences are significantly longer than for the seq2seq baseline. Since NMT already performs relatively poorly on long sentences BIBREF0 , adding linearized source parses may exacerbate this issue. To detect whether this occurs, we calculate BLEU by sentence length.
We bucket the sentences in newstest2017 by source sentence length. We then compute BLEU scores for each bucket for the seq2seq and parse2seq baselines and the lexicalized multi-source system. The results are in Figure FIGREF15 .
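The bucketing itself can be sketched as follows (assuming the sacrebleu package for scoring; bucket edges and variable names are illustrative, not the exact evaluation script used here):

```python
import sacrebleu

def bleu_by_source_length(sources, hypotheses, references, edges=(10, 20, 30, 40, 50)):
    """Group examples by source length (in words) and score each bucket separately."""
    buckets = {}
    for src, hyp, ref in zip(sources, hypotheses, references):
        n = len(src.split())
        key = next((f"<={e}" for e in edges if n <= e), f">{edges[-1]}")
        hyps, refs = buckets.setdefault(key, ([], []))
        hyps.append(hyp)
        refs.append(ref)
    return {k: sacrebleu.corpus_bleu(h, [r]).score for k, (h, r) in buckets.items()}
```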
In line with previous work on NMT on long sentences BIBREF0 , BIBREF8 , we see a significant deterioration in BLEU for longer sentences for all systems. In particular, although the parse2seq model outperformed the seq2seq model overall, it does worse than seq2seq for sentences containing more than 30 words. This indicates that parse2seq performance does indeed suffer due to its long sequences. On the other hand, the multi-source system outperforms the seq2seq baseline for all sentence lengths and does particularly well for sentences with over 50 words. This may be because the multi-source system has both sequential and parsed input, so it can rely more on sequential input for very long sentences.
Conclusion
In this paper, we presented a multi-source method for effectively incorporating linearized parses of the source data into neural machine translation. This method, in which the parsed and sequential versions of the sentence were both taken as input during training and inference, resulted in gains of up to 1.5 BLEU on EN INLINEFORM0 DE translation. In addition, unlike parse2seq, the multi-source model translated reasonably well even when the source sentence was not parsed.
In the future, we will explore adding back-translated BIBREF18 or copied BIBREF19 target data to our multi-source system. The multi-source model does not require all training data to be parsed; thus, monolingual data can be used even if the parser is unreliable for the synthetic or copied source sentences.
Acknowledgments
This work was funded by the Amazon Academic Research Awards program. | RNN encoders |
bc1e3f67d607bfc7c4c56d6b9763d3ae7f56ad5b | bc1e3f67d607bfc7c4c56d6b9763d3ae7f56ad5b_0 | Q: What is the performance drop of their model when there is no parsed input?
Text: Introduction
Neural machine translation (NMT) typically makes use of a recurrent neural network (RNN) -based encoder and decoder, along with an attention mechanism BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . However, it has been shown that RNNs require some supervision to learn syntax BIBREF4 , BIBREF5 , BIBREF6 . Therefore, explicitly incorporating syntactic information into NMT has the potential to improve performance. This is particularly true for source syntax, which can improve the model's representation of the source language.
Recently, there have been a number of proposals for using linearized representations of parses within standard NMT BIBREF7 , BIBREF8 , BIBREF9 . Linearized parses are advantageous because they can inject syntactic information into the models without significant changes to the architecture. However, using linearized parses in a sequence-to-sequence (seq2seq) framework creates some challenges, particularly when using source parses. First, the parsed sequences are significantly longer than standard sentences, since they contain node labels as well as words. Second, these systems often fail when the source sentence is not parsed. This can be a problem for inference, since the external parser may fail on an input sentence at test time. We propose a method for incorporating linearized source parses into NMT that addresses these challenges by taking both the sequential source sentence and its linearized parse simultaneously as input in a multi-source framework. Thus, the model is able to use the syntactic information encoded in the parse while falling back to the sequential sentence when necessary. Our proposed model improves over both standard and parsed NMT baselines.
Seq2seq Neural Parsing
Using linearized parse trees within sequential frameworks was first done in the context of neural parsing. vinyals2015grammar parsed using an attentional seq2seq model; they used linearized, unlexicalized parse trees on the target side and sentences on the source side. In addition, as in this work, they used an external parser to create synthetic parsed training data, resulting in improved parsing performance. choe2016parsing adopted a similar strategy, using linearized parses in an RNN language modeling framework.
NMT with Source Syntax
Among the first proposals for using source syntax in NMT was that of luong2015multi, who introduced a multi-task system in which the source data was parsed and translated using a shared encoder and two decoders. More radical changes to the standard NMT paradigm have also been proposed. eriguchi2016tree introduced tree-to-sequence NMT; this model took parse trees as input using a tree-LSTM BIBREF10 encoder. bastings2017graph used a graph convolutional encoder in order to take labeled dependency parses of the source sentences into account. hashimoto2017neural added a latent graph parser to the encoder, allowing it to learn soft dependency parses while simultaneously learning to translate.
Linearized Parse Trees in NMT
The idea of incorporating linearized parses into seq2seq has been adapted to NMT as a means of injecting syntax. aharoni2017towards first did this by parsing the target side of the training data and training the system to generate parsed translations of the source input; this is the inverse of our parse2seq baseline. Similarly, nadejde2017syntax interleaved CCG supertags with words on the target side, finding that this improved translation despite requiring longer sequences.
Most similar to our multi-source model is the parallel RNN model proposed by li2017modeling. Like multi-source, the parallel RNN used two encoders, one for words and the other for syntax. However, they combined these representations at the word level, whereas we combine them on the sentence level. Their mixed RNN model is also similar to our parse2seq baseline, although the mixed RNN decoder attended only to words. As the mixed RNN model outperformed the parallel RNN model, we do not attempt to compare our model to parallel RNN. These models are similar to ours in that they incorporate linearized parses into NMT; here, we utilize a multi-source framework.
Multi-Source NMT
Multi-source methods in neural machine translation were first introduced by zoph2016multi for multilingual translation. They used one encoder per source language, and combined the resulting sentence representations before feeding them into the decoder. firat2016multi expanded on this by creating a multilingual NMT system with multiple encoders and decoders. libovicky2017attention applied multi-source NMT to multimodal translation and automatic post-editing and explored different strategies for combining attention over the two sources. In this paper, we apply the multi-source framework to a novel task, syntactic neural machine translation.
NMT with Linearized Source Parses
We propose a multi-source method for incorporating source syntax into NMT. This method makes use of linearized source parses; we describe these parses in section SECREF5 . Throughout this paper, we refer to standard sentences that do not contain any explicit syntactic information as sequential; see Table TABREF6 for an example.
Linearized Source Parses
We use an off-the-shelf parser, in this case Stanford CoreNLP BIBREF11 , to create binary constituency parses. These parses are linearized as shown in Table TABREF6 . We tokenize the opening parentheses with the node label (so each node label begins with a parenthesis) but keep the closing parentheses separate from the words they follow. For our task, the parser failed on one training sentence of 5.9 million, which we discarded, and succeeded on all test sentences. It took roughly 16 hours to parse the 5.9 million training sentences.
Following sennrich2015neural, our networks operate at the subword level using byte pair encoding (BPE) with a shared vocabulary on the source and target sides. However, the parser operates at the word level. Therefore, we parse then break into subwords, so a leaf may have multiple tokens without internal structure.
The proposed method is tested using both lexicalized and unlexicalized parses. In unlexicalized parses, we remove the words, keeping only the node labels and the parentheses. In lexicalized parses, the words are included. Table TABREF6 shows an example of the three source sentence formats: sequential, lexicalized parse, and unlexicalized parse. Note that the lexicalized parse is significantly longer than the other versions.
Multi-Source
We propose a multi-source framework for injecting linearized source parses into NMT. This model consists of two identical RNN encoders with no shared parameters, as well as a standard RNN decoder. For each target sentence, two versions of the source sentence are used: the sequential (standard) version and the linearized parse (lexicalized or unlexicalized). Each of these is encoded simultaneously using the encoders; the encodings are then combined and used as input to the decoder. We combine the source encodings using the hierarchical attention combination proposed by libovicky2017attention. This consists of a separate attention mechanism for each encoder; these are then combined using an additional attention mechanism over the two separate context vectors. This multi-source method is thus able to combine the advantages of both standard RNN-based encodings and syntactic encodings.
Data
We base our experiments on the WMT17 BIBREF12 English (EN) INLINEFORM0 German (DE) news translation task. All 5.9 million parallel training sentences are used, but no monolingual data. Validation is done on newstest2015, while newstest2016 and newstest2017 are used for testing.
We train a shared BPE vocabulary with 60k merge operations on the parallel training data. For the parsed data, we break words into subwords after applying the Stanford parser. We tokenize and truecase the data using the Moses tokenizer and truecaser BIBREF13 .
Implementation
The models are implemented in Neural Monkey BIBREF14 . They are trained using Adam BIBREF15 and have minibatch size 40, RNN size 512, and dropout probability 0.2 BIBREF16 . We train to convergence on the validation set, using BLEU BIBREF17 as the metric.
For sequential inputs and outputs, the maximum sentence length is 50 subwords. For parsed inputs, we increase maximum sentence length to 150 subwords to account for the increased length due to the parsing labels; we still use a maximum output length of 50 subwords for these systems.
Baselines
The proposed models are compared against two baselines. The first, referred to here as seq2seq, is the standard RNN-based neural machine translation system with attention BIBREF0 . This baseline does not use the parsed data.
The second baseline we consider is a slight modification of the mixed RNN model proposed by li2017modeling. This uses an identical architecture to the seq2seq baseline (except for a longer maximum sentence length in the encoder). Instead of using sequential data on the source side, the linearized parses are used. We allow the system to attend equally to words and node labels on the source side, rather than restricting the attention to words. We refer to this baseline as parse2seq.
Results
Table TABREF11 shows the performance on EN INLINEFORM0 DE translation for each of the proposed systems and the baselines, as approximated by BLEU score.
The multi-source systems improve strongly over both baselines, with improvements of up to 1.5 BLEU over the seq2seq baseline and up to 1.1 BLEU over the parse2seq baseline. In addition, the lexicalized multi-source systems yield slightly higher BLEU scores than the unlexicalized multi-source systems; this is surprising because the lexicalized systems have significantly longer sequences than the unlexicalized ones. Finally, it is interesting to compare the seq2seq and parse2seq baselines. Parse2seq outperforms seq2seq by only a small amount compared to multi-source; thus, while adding syntax to NMT can be helpful, some ways of doing so are more effective than others.
Inference Without Parsed Sentences
The parse2seq and multi-source systems require parsed source data at inference time. However, the parser may fail on an input sentence. Therefore, we examine how well these systems do when given only unparsed source sentences at test time.
Table TABREF13 displays the results of these experiments. For the parse2seq baseline, we use only sequential (seq) data as input. For the lexicalized and unlexicalized multi-source systems, two options are considered: seq + seq uses identical sequential data as input to both encoders, while seq + null uses null input for the parsed encoder, where every source sentence is “( )”.
The parse2seq system fails when given only sequential source data. On the other hand, both multi-source systems perform reasonably well without parsed data, although the BLEU scores are worse than multi-source with parsed data.
BLEU by Sentence Length
For models that use source-side linearized parses (multi-source and parse2seq), the source sequences are significantly longer than for the seq2seq baseline. Since NMT already performs relatively poorly on long sentences BIBREF0 , adding linearized source parses may exacerbate this issue. To detect whether this occurs, we calculate BLEU by sentence length.
We bucket the sentences in newstest2017 by source sentence length. We then compute BLEU scores for each bucket for the seq2seq and parse2seq baselines and the lexicalized multi-source system. The results are in Figure FIGREF15 .
In line with previous work on NMT on long sentences BIBREF0 , BIBREF8 , we see a significant deterioration in BLEU for longer sentences for all systems. In particular, although the parse2seq model outperformed the seq2seq model overall, it does worse than seq2seq for sentences containing more than 30 words. This indicates that parse2seq performance does indeed suffer due to its long sequences. On the other hand, the multi-source system outperforms the seq2seq baseline for all sentence lengths and does particularly well for sentences with over 50 words. This may be because the multi-source system has both sequential and parsed input, so it can rely more on sequential input for very long sentences.
Conclusion
In this paper, we presented a multi-source method for effectively incorporating linearized parses of the source data into neural machine translation. This method, in which the parsed and sequential versions of the sentence were both taken as input during training and inference, resulted in gains of up to 1.5 BLEU on EN INLINEFORM0 DE translation. In addition, unlike parse2seq, the multi-source model translated reasonably well even when the source sentence was not parsed.
In the future, we will explore adding back-translated BIBREF18 or copied BIBREF19 target data to our multi-source system. The multi-source model does not require all training data to be parsed; thus, monolingual data can be used even if the parser is unreliable for the synthetic or copied source sentences.
Acknowledgments
This work was funded by the Amazon Academic Research Awards program. | improvements of up to 1.5 BLEU over the seq2seq baseline |
e8e00b4c0673af5ab02ec82563105e4157cc54bb | e8e00b4c0673af5ab02ec82563105e4157cc54bb_0 | Q: How did their results compare to the state of the art?
Text: Introduction
Machine translation, a field of study within natural language processing, aims at translating natural language automatically using machines. Data-driven machine translation has become the dominant line of research owing to the availability of large parallel corpora. The primary objective of data-driven machine translation is to translate unseen source-language text, given that the systems learn translation knowledge from sentence-aligned bilingual training data.
Statistical Machine Translation (SMT) is a data-driven approach that uses probabilistic models to capture the translation process. Early models in SMT relied on generative models taking the word as the basic unit BIBREF0, maximum-entropy-based discriminative models using features learned from sentences BIBREF1, and simple and hierarchical phrases BIBREF2, BIBREF3. These methods have been used extensively since 2002, despite the fact that discriminative models faced the challenge of data sparsity. Discrete word-based representations made SMT vulnerable to learning poor estimates on account of low-count events. Moreover, designing features for SMT manually is a difficult task and requires domain expertise, which is hard given the variety and complexity of different natural languages.
Recent years have seen the extraordinary success of deep learning applications in machine translation. Deep learning approaches have surpassed statistical methods in practically all sub-fields of MT and have become the de facto technique both in academia and in industry. As part of this work, we discuss the two areas where deep learning has been used significantly in MT. We briefly examine component- or domain-based deep learning methods for machine translation BIBREF4, which use deep learning models to improve the effectiveness of various components used in SMT, including language models, translation models, and re-ordering models. Our primary focus is on end-to-end deep learning models for machine translation BIBREF5, BIBREF6, which use neural networks to extract the correspondence between a source and target language directly, in a holistic manner, without using any hand-crafted features. These models are now known as Neural Machine Translation (NMT).
Let $x$ denote the source-language sentence and $y$ the target-language sentence. Given a set of model parameters $\theta $, the aim of any machine translation algorithm is to find the translation with maximum probability $\hat{y}$:
The decision rule is re-written using Bayes' rule as BIBREF0:
where $P(y;\theta _{lm})$ is called the language model and $P(x|y;\theta _{tm})$ is called the translation model. The translation model, in turn, is characterized as a generative model, which is decomposed via latent structures.
where $z$ denotes latent structures such as the word alignment between the source language and the target language.
End-to-End Deep Learning for Machine translation
End-to-end machine translation models BIBREF5, BIBREF6, also known as Neural Machine Translation (NMT), aim to find a correspondence between source and target natural languages with the help of deep neural networks. The main distinction between NMT and traditional Statistical Machine Translation (SMT) BIBREF0, BIBREF7, BIBREF2, BIBREF1 based approaches is that neural models are capable of learning complex relationships between natural languages directly from the data, without resorting to manual hand-crafted features, which are difficult to design.
The standard problem in machine translation remains the same: given a sequence of words in a source-language sentence $X = x_{1},....x_{j},....x_{J}$ and a target-language sentence $Y = y_{1},....y_{i},....y_{I}$, NMT attempts to factor the sentence-level translation probability into context-dependent sub-word translation probabilities.
Here $y_{<i}$ is referred to as the partial translation. The context between the source and target sentence can become sparse when the sentences grow excessively long; to tackle this issue, BIBREF5 proposed an encoder-decoder network that can represent a variable-length sentence as a fixed-length vector and use this distributed vector to translate sentences.
End-to-End Deep Learning for Machine translation ::: Encoder Decoder Framework for Machine Translation
Neural machine translation models follow an encoder-decoder architecture: the job of the encoder is to represent an arbitrary-length sentence as a fixed-length real-valued vector, termed the context vector. This context vector contains all the essential features that can be inferred from the source sentence itself. The decoder network takes this vector as input and produces the target sentence word by word. The ideal decoder is expected to produce a sentence that preserves the full context of the source-language sentence. Figure FIGREF10 shows the overall architecture of the encoder-decoder neural network for machine translation.
Since source and target sentences are typically of different lengths, BIBREF5 initially proposed recurrent neural networks for both the encoder and decoder. To address the vanishing and exploding gradient problems arising from dependencies among word pairs, the Long Short-Term Memory (LSTM) BIBREF8 and Gated Recurrent Unit (GRU) BIBREF9 cells were proposed in place of the vanilla RNN cell.
Training in NMT is done by maximising log-likelihood as the objective function:
Where $L(\theta )$ is defined as:
After training, the learned parameters $\hat{\theta }$ are used for translation as:
End-to-End Deep Learning for Machine translation ::: Attention Mechanism in Neural Machine Translation
The encoder network proposed by BIBREF5 represents the source-language sentence as a fixed-length vector, which is subsequently used by the decoder network. Through empirical testing, it was observed that the quality of translation depends heavily on the length of the source sentence and degrades significantly as the sentence length increases.
To address this issue, BIBREF6 proposed to integrate an attention mechanism into the encoder network and showed that it can dynamically select the relevant parts of the source-sentence context to produce the target sentence. They used bi-directional RNNs (BRNNs) to capture global context:
The forward hidden state $\overrightarrow{h_{s}}$ and backward hidden state $\overleftarrow{h_{s}}$ are concatenated to capture sentence-level context.
The basic idea behind attention is to seek out the portions of interest in the source text when generating each target word; this is done by first computing attention weights.
where $a(t_{j-1},h_{i},\theta )$ is the alignment function, which evaluates how well the input at position $i$ and the output at position $j$ match. The context vector $c_{j}$ is computed as a weighted sum of the hidden states of the source.
The target hidden state is then computed as follows.
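One standard way to write these three steps (attention weights, context vector, and target hidden state), consistent with the notation above though possibly differing from the exact parametrization in BIBREF6, is:

```latex
\begin{align}
\alpha_{ji} &= \frac{\exp\big(a(t_{j-1}, h_i, \theta)\big)}{\sum_{i'} \exp\big(a(t_{j-1}, h_{i'}, \theta)\big)} \\
c_j &= \sum_{i} \alpha_{ji} \, h_i \\
t_j &= f(t_{j-1}, y_{j-1}, c_j)
\end{align}
```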
Figure FIGREF18 shows the attention mechanism at the encoder level; the context vector is then used by the decoder layer for translation. The distinction between attention-based NMT BIBREF6 and the original encoder-decoder architecture BIBREF5 lies in how the source context is computed: in the original encoder-decoder, the source's hidden state is used to initialize the target's initial hidden state, whereas in the attention mechanism a weighted sum of hidden states is used, which ensures that the importance of every source word in the sentence is well preserved in the context. This greatly improves translation performance, and attention has therefore become the state-of-the-art component in neural machine translation models.
Neural Architectures for NMT
The majority of encoder-decoder based NMT models have used RNNs and their variants, the LSTM BIBREF8 and the GRU BIBREF9. Recently, convolutional networks (CNNs) BIBREF10 and self-attention networks BIBREF11 have been explored and have produced promising results.
The issue with using recurrent networks in NMT is that they compute sequentially and need to maintain a hidden state at each step of training. This makes training highly inefficient and time-consuming. BIBREF10 showed that convolutional networks can, in contrast, learn fixed-length hidden states using convolution operations. The main advantage of this approach is that the convolution operation does not depend on previously computed values and can be parallelized for multi-core training. Moreover, convolutional layers can be stacked one after another to learn deeper context, making them a good choice for both the encoder and the decoder.
Recurrent networks capture the dependencies among the words of a sentence in $O(n)$ operations, while convolutional networks can achieve the same in $O(log_{k}n)$, where $k$ is the size of the convolution kernel.
BIBREF11 proposed a model that computes the dependency between every word pair in a sentence using only attention layers stacked one after another in both the encoder and the decoder, which they named self-attention; the overall architecture is shown in Figure FIGREF19. In their model, the hidden state is computed using self-attention and a feed-forward network; positional encodings are used to inject features based on the position of each word in the sentence, and their self-attention layer, named multi-head attention, is highly parallelizable. For this reason the model significantly speeds up NMT training, and it also brings better results than the benchmark recurrent-network-based models.
At present, there is no clear verdict on which neural architecture is best, and different architectures give different results depending on the problem at hand. Neural architecture design is still considered one of the most active research areas in neural machine translation.
Research gaps and open problems
Deep learning methods have transformed the field of machine translation, with early efforts concentrating on improving the key components of statistical machine translation such as word alignment BIBREF12, the translation model BIBREF2, BIBREF13, phrase reordering BIBREF14, BIBREF15, and the language model BIBREF16. Since 2010, a large portion of the research has shifted towards developing end-to-end neural models that remove the need for extensive feature engineering BIBREF5, BIBREF6. Neural models have effectively replaced statistical models in virtually all academic and industrial applications since their inception.
Although deep learning has accelerated research in the machine translation community, current NMT models are not free from flaws and have certain limitations. In this section, we describe some open research issues in NMT; our aim is to help researchers working in this field become familiar with these issues and work towards them for even faster progress in the field.
Research gaps and open problems ::: Neural models motivated by semantic approaches
End-to-end models have become the de facto choice in machine translation, yet it is difficult to interpret the internal computations of neural networks, which are often simply treated as a "black box". One possible area of research is to develop linguistically motivated neural models with better interpretability. It is hard to extract knowledge from the hidden states of current neural networks, and it is similarly hard to incorporate prior knowledge, which is symbolic in nature, into the continuous representations of these states BIBREF17.
Research gaps and open problems ::: Lightweight neural models for learning from scarce data
Another major drawback of NMT is data scarcity. It is well known that NMT models are data hungry and require millions of training examples to give their best results. The issue arises because sufficient parallel corpora do not exist for the majority of language pairs in the world. Building models that can learn decent representations from relatively small datasets is therefore an actively researched problem today. A related issue is to develop one-to-many and many-to-many translation models instead of one-to-one models. Researchers do not yet know how to generalize knowledge across languages with neural networks from a linguistic point of view, although such knowledge would help develop multilingual translation models rather than the one-to-one models used today.
Research gaps and open problems ::: Multi-modal neural architectures
One more issue is to develop multi-modal translation models. Almost all work to date has been based on textual data. Research on building continuous representations that fuse text, speech, and visual data into multi-modal systems is in full swing. Moreover, since there is limited or no multi-modal parallel corpora available, the development of such datasets is itself an interesting direction to explore and would also benefit multi-modal neural architectures.
Research gaps and open problems ::: Parallel and distributed algorithms for training neural models
Finally, current neural architectures rely heavily on extensive computational power to give competent results BIBREF18, BIBREF19, BIBREF20. Although there is no shortage of compute and storage at present, it would be more efficient to come up with lighter neural models for translation. Moreover, recurrent models BIBREF5, BIBREF6 cannot be parallelized, which makes it hard to build distributed systems for model training. Fortunately, recent developments such as convolutional networks and self-attention networks can be parallelized and thus distributed across multiple systems. But since they contain millions of interdependent parameters, it is difficult to distribute them among loosely coupled systems. Thus, developing light neural architectures designed for distributed training could be a new frontier of NMT.
Methodology
The proposed methodology can be broken down into several atomic objectives. The first step is the acquisition of parallel corpora; the next step is to pre-process the acquired data. Various neural models are then implemented and trained on the pre-processed data. The last part of our study is to compare the results obtained by the models in a comparative analysis.
Methodology ::: Data Acquisition and preparation
For this study, we work with the English-Hindi parallel corpus curated and made publicly available by the Center of Indian Language Technologies (CFILT), Indian Institute of Technology, Bombay BIBREF21. Table TABREF25 shows the number of parallel sentences in the train and test data. This parallel dataset contains more than 1.5 million parallel sentences for training and testing; to the best of our knowledge, there is no literature to date presenting a comparative study of neural models on this dataset.
After obtaining and unzipping the data, the next step in our pipeline is to decompose rare words in the corpora using subword byte pair encoding (BPE) BIBREF22. Byte pair encoding is a useful approach when an extremely large vocabulary hinders model training: rare words are decomposed into common subwords and the vocabulary is built accordingly. To encode the training corpora using BPE, we first need to generate the BPE operations. This step creates a file named bpe32k, which contains 32k BPE operations, and also outputs two dictionaries named vocab.de and vocab.en. A similar methodology is applied to the Hindi-English data as well.
Model components
For this study, a sequence-to-sequence LSTM network and an attention-based encoder-decoder using GRU cells have been implemented. A self-attention Transformer network has also been implemented, and all the models are tested side by side to establish a clear picture of their relative strengths. The basic theory of the model components used is given below.
Model components ::: RNN Cell
The basic neural unit in a feed-forward network works well for several problems but fails when the order of the data matters; as a result, such models fail to generalize to problems dealing with temporal or sequential data. The reason behind this failure is that the basic neural unit does not take past information into account in its computation, and it is with this motivation that the basic RNN cell was developed. Recurrent Neural Networks (RNNs) are networks with recurrent cells that are capable of combining past information with the current input, and as a result these models have seen huge success in problems with sequential input, such as those in natural language processing, weather forecasting, and other similar domains.
The basic mathematical equations underlying the RNN cell are described below:
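Consistent with the weight names used in the following sentence, the standard vanilla-RNN update (bias terms omitted) is:

```latex
\begin{align}
h_t &= f\big(W_{xh}\, x_t + W_{hh}\, h_{t-1}\big) \\
y_t &= W_{hy}\, h_t
\end{align}
```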
Here $x_{t}$ and $y_{t}$ are the input and output at the $t^{th}$ time step, $W_{hh}$, $W_{xh}$ and $W_{hy}$ are connection weights respectively.
Model components ::: GRU Cell
Although the RNN cell outperforms non-sequential neural networks, it fails to generalize to problems with longer sequences. The problem arises from the inability to capture long-term dependencies among the sequential units, a phenomenon termed the vanishing gradient problem. To solve this problem, BIBREF9 proposed a gated approach to explicitly capture long-term memory using gating units; this cell was termed the Gated Recurrent Unit (GRU). The schematic diagram of the GRU cell is given in Figure FIGREF35.
The difference between the GRU cell and the RNN cell lies in the computation of the hidden values: the GRU uses two gates, update ($z$) and reset ($r$), to capture long-term dependencies. The mathematical equations behind the computation are given below.
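As a concrete illustration (not the exact parametrization used here; weight shapes, initialization, and the interpolation convention in the last line are assumptions), one GRU step can be sketched in numpy as:

```python
# One GRU step with update gate z and reset gate r (bias terms omitted).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, Wz, Uz, Wr, Ur, W, U):
    z = sigmoid(Wz @ x_t + Uz @ h_prev)            # update gate
    r = sigmoid(Wr @ x_t + Ur @ h_prev)            # reset gate
    h_tilde = np.tanh(W @ x_t + U @ (r * h_prev))  # candidate hidden state
    return (1 - z) * h_prev + z * h_tilde          # new hidden state

d_in, d_h = 8, 16
rng = np.random.default_rng(0)
params = [rng.normal(scale=0.1, size=s) for s in [(d_h, d_in), (d_h, d_h)] * 3]
h = gru_step(rng.normal(size=d_in), np.zeros(d_h), *params)
print(h.shape)   # (16,)
```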
Model components ::: LSTM Cell
Long Short-Term Memory (LSTM), proposed by BIBREF8, is another approach to overcoming the vanishing gradient problem in RNNs. Like the GRU, the LSTM uses a gating mechanism, but it uses three gates instead of the GRU's two to capture long-term dependencies. The schematic diagram of the LSTM cell is given in Figure FIGREF35.
The LSTM cell uses input ($i$), output ($o$), and forget ($f$) gates for the computation of its hidden state. The equations are similar to those of the GRU cell; like the GRU, the LSTM uses the sigmoid activation to add non-linearity.
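For reference, one common formulation of the LSTM cell with these three gates (bias terms omitted; not necessarily the exact variant implemented here) is:

```latex
\begin{align}
i_t &= \sigma(W_i x_t + U_i h_{t-1}), \quad
f_t = \sigma(W_f x_t + U_f h_{t-1}), \quad
o_t = \sigma(W_o x_t + U_o h_{t-1}) \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1}), \quad
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, \quad
h_t = o_t \odot \tanh(c_t)
\end{align}
```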
Model components ::: Attention Mechanism
The attention mechanism was first developed by BIBREF6 in their paper "Neural Machine Translation by Jointly Learning to Align and Translate", as a natural extension of their previous work on the sequence-to-sequence encoder-decoder model. Attention is proposed as a solution to mitigate the limitation of the encoder-decoder architecture, which encodes the input sequence into one fixed-length vector from which the output is decoded at each time step. This limitation becomes more of an issue when decoding long sequences. Attention is proposed as a single method to both align and translate: alignment is the problem in machine translation of finding which parts of the input sequence are relevant to each word in the output, whereas translation is the process of using the relevant information to select the appropriate output.
Model components ::: Transformer Network
We use a Transformer self-attention encoder in our study. The Transformer model BIBREF11 is made up of $M$ consecutive blocks. Each block of the Transformer, denoted by $transformer_{l}$, contains two separate components, multi-head attention and a feed-forward network. The output of each token $j$ of block $l$ is connected to its input through a residual connection. The input to the first block is $b_{j}^{0} = x_{j}$.
Multi-head attention applies self-attention over the same inputs multiple times, using separately normalized parameters (attention heads), and finally concatenates the results of each head. The multi-head attention mechanism is considered a better alternative to a single pass of attention with more parameters, as the former can be easily parallelized. Furthermore, computing attention with multiple heads makes it easier for the model to learn and attend to different types of relevant information with each head. Self-attention updates the input $b_{j}^{l-1}$ by computing a weighted sum over all tokens in the sequence, weighted by their importance for modeling token $j$.
Inside multi-head attention, each input is projected to a query, key, and value ($q, k, v$), all lying in $\mathbb {R}^{d/H}$, where $H$ is the number of heads and $d$ is the embedding dimension. The attention weight $a_{mnh}$ for head $h$ between tokens $m$ and $n$ is given by the scaled dot product between the query projection of token $m$ and the key projection of token $n$, normalized over $n$ with a softmax.
Finally, the outputs of the individual heads are concatenated to form the output of the multi-head attention layer.
Experiments and results ::: Neural models
For this study, a self-attention-based Transformer network is implemented and compared against sequence-to-sequence and attention-based encoder-decoder neural architectures. All the implementation and coding is done using the above-mentioned programming framework. We train all three models in an end-to-end manner on the CFILT Hindi-English parallel corpora, and the results of the three models are compared using similar hyper-parameter values for ease of comparison.
For the sequence-to-sequence model, we use LSTM cells, since no attention mechanism is involved and it is desirable for the model to capture long-term dependencies; as the LSTM captures long-term dependencies better than the GRU cell, we chose the former. The embedding and hidden layers have 512 dimensions, and both the encoder and the decoder contain 2 hidden layers. For regularization, we use dropout with a rate of 20 percent. A batch size of 128 is used.
For the attention-based RNNsearch model, we use GRU cells, since the attention mechanism is already employed and captures long-term dependencies explicitly through the attention values; the GRU cell is also computationally more efficient than the LSTM cell. As in the sequence-to-sequence model, the embedding and hidden layers have 512 dimensions, both the encoder and the decoder contain 2 hidden layers, dropout is set to 20 percent, and the batch size is 128. For the self-attention Transformer network, the hidden and embedding layers are of size 512, and for both the encoder and the decoder we fix the number of self-attention layers to 4. In each layer we use 8 parallel attention heads, and the hidden size of the feed-forward network is 1024 in each block. Attention dropout and residual dropout are set to 10 percent. Table TABREF47 shows the number of trainable parameters in each of our three models.
The optimizer used in our study is Adam, with a learning-rate decay value of 0.001, $\beta _{1}$ of 0.9, and $\beta _{2}$ of 0.98 for the first- and second-order gradient moments respectively. The training objective is the log-loss between the predicted and target words of each sentence. All models are trained for 100,000 steps, where at each step a batch of 128 sentences is used to compute the loss. The primary objective of training is to minimize the log-loss while maximizing the evaluation metric, chosen to be the BLEU score BIBREF23.
Results
Table TABREF48 shows the BLEU scores of all three models on the English-Hindi and Hindi-English directions of CFILT's test dataset. From the results, it is evident that the Transformer model achieves a higher BLEU score than both the attention encoder-decoder and the sequence-to-sequence model. The attention encoder-decoder achieves the second-best BLEU score and the sequence-to-sequence model performs the worst of the three, which further reinforces the point that, when dealing with long source and target sentences, an attention mechanism is essential for capturing long-term dependencies, and that one can rely solely on the attention mechanism, dispensing with recurrent cells entirely, for the machine translation task.
Figure FIGREF49 shows the word-word association heat map for selected translated and target sentences when the Transformer model is trained on the English-Hindi translation task; similarly, Figure FIGREF50 shows the corresponding heat map when the Transformer model is trained on the Hindi-English translation task.
Conclusion
In this paper, we first discussed machine translation, starting with a brief overview of the basic machine translation objective and terminology along with early statistical approaches (SMT). We then discussed the role of deep learning models in improving different components of SMT, before shifting our discussion to end-to-end neural machine translation (NMT). Our discussion largely covered the basic encoder-decoder NMT and the attention-based model. We then listed the challenges facing neural translation models and outlined future directions and open problems. Finally, we proposed a self-attention Transformer network for Hindi-English translation, compared this model with other neural machine translation models on the basis of BLEU, and concluded our study by delineating the advantages and disadvantages of all three models. | transformer model achieves higher BLEU score than both Attention encoder-decoder and sequence-sequence model
18ad60f97f53af64cb9db2123c0d8846c57bfa4a | 18ad60f97f53af64cb9db2123c0d8846c57bfa4a_0 | Q: What supports the claim that injecting CNN into recurrent units will enhance the ability of the model to capture local context and reduce ambiguities?
Text: Introduction
Neural network based approaches have become popular frameworks in many machine learning research fields, showing its advantages over traditional methods. In NLP tasks, two types of neural networks are widely used: Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN).
RNNs are powerful models in various NLP tasks, such as machine translation BIBREF0, sentiment classification BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, reading comprehension BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, etc. The recurrent neural networks can flexibly model different lengths of sequences into a fixed representation. There are two main implementations of RNN: Long Short-Term Memory (LSTM) BIBREF12 and Gated Recurrent Unit (GRU) BIBREF0, which solve the gradient vanishing problems in vanilla RNNs.
Compared to RNN, the CNN model also shows competitive performances in some tasks, such as text classification BIBREF13, etc. However, different from RNN, CNN sets a pre-defined convolutional kernel to “summarize” a fixed window of adjacent elements into blended representations, showing its ability of modeling local context.
As both global and local information is important in most of NLP tasks BIBREF14, in this paper, we propose a novel recurrent unit, called Contextual Recurrent Unit (CRU). The proposed CRU model adopts advantages of RNN and CNN, where CNN is good at modeling local context, and RNN is superior in capturing long-term dependencies. We propose three variants of our CRU model: shallow fusion, deep fusion and deep-enhanced fusion.
To verify the effectiveness of our CRU model, we apply it to two different NLP tasks: sentiment classification and reading comprehension, where the former is sentence-level modeling and the latter is document-level modeling. In the sentiment classification task, we build a standard neural network and replace the recurrent unit with our CRU model. To further demonstrate the effectiveness of our model, we also test our CRU in reading comprehension tasks with a strengthened baseline system originating from the Attention-over-Attention Reader (AoA Reader) BIBREF10. Experimental results on public datasets show that our CRU model could substantially outperform various systems by a large margin and set new state-of-the-art performances on the related datasets. The main contributions of our work are listed as follows.
We propose a novel neural recurrent unit called Contextual Recurrent Unit (CRU), which effectively incorporates the advantages of CNN and RNN. Different from previous works, our CRU model retains the flexibility of the GRU while providing better performance.
The CRU model is applied to both sentence-level and document-level modeling tasks and gives state-of-the-art performances.
The CRU could also give substantial improvements in cloze-style reading comprehension task when the baseline system is strengthened by incorporating additional features which will enrich the representations of unknown words and make the texts more readable to the machine.
Related Works
Gated recurrent unit (GRU) has been proposed in the scenario of neural machine translations BIBREF0. It has been shown that the GRU has comparable performance in some tasks compared to the LSTM. Another advantage of GRU is that it has a simpler neural architecture than LSTM, showing a much efficient computation.
However, convolutional neural network (CNN) is not as popular as RNNs in NLP tasks, as the texts are formed temporally. But in some studies, CNN shows competitive performance to the RNN models, such as text classification BIBREF13.
Various efforts have been made on combining CNN and RNN. BIBREF3 proposed an architecture that combines CNN and GRU model with pre-trained word embeddings by word2vec. BIBREF5 proposed to combine asymmetric convolution neural network with the bidirectional LSTM network. BIBREF4 presented Dependency Sensitive CNN, which hierarchically construct text by using LSTMs and extracting features with convolution operations subsequently. BIBREF15 propose to make use of dependency relations information in the shortest dependency path (SDP) by combining CNN and two-channel LSTM units. BIBREF16 build a neural network for dialogue topic tracking where the CNN used to account for semantics at individual utterance and RNN for modeling conversational contexts along multiple turns in history.
The difference between our CRU model and previous works can be concluded as follows.
Our CRU model could adaptively control the amount of information that flows into different gates, which was not studied in previous works.
Also, the CRU does not introduce a pooling operation, as opposed to other works, such as CNN-GRU BIBREF3. Our motivation is to provide the same flexibility as the original GRU, while the pooling operation breaks this property (the output length is changed) and makes it impossible to do exact word-level attention over the output. In our CRU model, however, the output length is the same as the input's, so the CRU can be easily applied to the various tasks where the GRU is used.
We also observed that using only a CNN to summarize contextual information is not strong enough. So we incorporate the original word embeddings to form a "word + context" representation for enhancement.
Our approach
In this section, we will give a detailed introduction to our CRU model. Firstly, we will give a brief introduction to GRU BIBREF0 as preliminaries, and then three variants of our CRU model will be illustrated.
Our approach ::: Gated Recurrent Unit
Gated Recurrent Unit (GRU) is a type of recurrent unit that models sequential data BIBREF0; it is similar to the LSTM but much simpler and more computationally efficient. We will briefly introduce the formulation of the GRU. Given a sequence $x = \lbrace x_1, x_2, ..., x_n\rbrace $, the GRU processes the data in the following way. For simplicity, the bias term is omitted in the following equations.
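A standard formulation of these equations is reproduced below (one common convention; the interpolation in the last line is sometimes written with $z_t$ and $1-z_t$ swapped):

```latex
\begin{align}
z_t &= \sigma(W_z x_t + U_z h_{t-1}) \\
r_t &= \sigma(W_r x_t + U_r h_{t-1}) \\
\tilde{h}_t &= \tanh\big(W x_t + U (r_t \odot h_{t-1})\big) \\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t
\end{align}
```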
where $z_t$ is the update gate, $r_t$ is the reset gate, and non-linear function $\sigma $ is often chosen as $sigmoid$ function. In many NLP tasks, we often use a bi-directional GRU, which takes both forward and backward information into account.
Our approach ::: Contextual Recurrent Unit
Modeling only word-level representations may have drawbacks in representing words that have different meanings when the context varies. Here is an example that shows this problem.
There are many fan mails in the mailbox.
There are many fan makers in the factory.
As we can see, though the two sentences share the same beginning before the word fan, the meanings of the word fan itself are totally different when we meet the following words mails and makers. The first fan means “a person that has strong interests in a person or thing", and the second one means “a machine with rotating blades for ventilation". However, the embedding of the word fan does not discriminate according to the context. Also, as the two sentences have the same beginning, when we apply a recurrent operation (such as a GRU) up to the word fan, the output of the GRU does not change, though the sentences have entirely different meanings once we see the following words.
To enrich the word representation with local contextual information and diminish word ambiguities, we propose a model as an extension to the GRU, called the Contextual Recurrent Unit (CRU). In this model, we take full advantage of the convolutional neural network and the recurrent neural network, where the former is good at modeling local information and the latter is capable of capturing long-term dependencies. Moreover, in the experiment part, we will also show that our bidirectional CRU can significantly outperform the bidirectional GRU model.
In this paper, we propose three different types of CRU models: shallow fusion, deep fusion and deep-enhanced fusion, from the most fundamental one to the most expressive one. We will describe these models in detail in the following sections.
Our approach ::: Contextual Recurrent Unit ::: Shallow Fusion
The most simple one is to directly apply a CNN layer after the embedding layer to obtain blended contextual representations. Then a GRU layer is applied afterward. We call this model as shallow fusion, because the CNN and RNN are applied linearly without changing inner architectures of both.
Formally, when given a sequential data $x = \lbrace x_1, x_2, ..., x_n\rbrace $, a shallow fusion of CRU can be illustrated as follows.
We first transform word $x_t$ into word embeddings through an embedding matrix $W_e$. Then a convolutional operation $\phi $ is applied to the context of $e_t$, denoted as $\widetilde{e_t}$, to obtain contextual representations. Finally, the contextual representation $c_t$ is fed into GRU units.
Following BIBREF13, we apply an embedding-wise convolution operation, which is commonly used in natural language processing tasks. Let $e_{i:j} \in \mathbb {R}^{(j-i+1)*d}$ denote the concatenation of $j-i+1$ consecutive $d$-dimensional word embeddings.
The embedding-wise convolution is to apply a convolution filter w $\in \mathbb {R}^{k*d}$ to a window of $k$ word embeddings to generate a new feature, i.e., summarizing a local context of $k$ words. This can be formulated as
where $f$ is a non-linear function and $b$ is the bias.
By applying the convolutional filter to all possible windows in the sentence, a feature map $c$ will be generated. In this paper, we apply a same-length convolution (the length of the sentence does not change), i.e. $c \in \mathbb {R}^{n*1}$. Then we apply $d$ filters with the same window size to obtain multiple feature maps. So the final output of the CNN has the shape $C \in \mathbb {R}^{n*d}$, which is exactly the same size as the $n$ word embeddings; this enables us to do exact word-level attention in various tasks.
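A minimal PyTorch sketch of this shallow-fusion variant is given below (an illustration under the description above, not the authors' implementation; the dimensions, padding choice, and the ReLU non-linearity are assumptions):

```python
import torch
import torch.nn as nn

class ShallowFusionCRU(nn.Module):
    """Shallow fusion: same-length embedding-wise convolution followed by a (bi-)GRU."""
    def __init__(self, vocab_size, d=128, k=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d)
        # d filters over a window of k words; padding keeps output length == input length (odd k)
        self.conv = nn.Conv1d(d, d, kernel_size=k, padding=k // 2)
        self.gru = nn.GRU(d, d, batch_first=True, bidirectional=True)

    def forward(self, x):                      # x: (batch, seq_len) token ids
        e = self.embed(x)                      # (batch, seq_len, d)
        c = torch.relu(self.conv(e.transpose(1, 2))).transpose(1, 2)  # (batch, seq_len, d)
        h, _ = self.gru(c)                     # (batch, seq_len, 2d)
        return h
```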
Our approach ::: Contextual Recurrent Unit ::: Deep Fusion
The contextual information that flows into the update gate and reset gate of GRU is identical in shallow fusion. In order to let the model adaptively control the amount of information that flows into these gates, we can embed CNN into GRU in a deep manner. We can rewrite the Equation 1 to 3 of GRU as follows.
where $\phi _z, \phi _r, \phi $ are three different CNN layers, i.e., the weights are not shared. When the weights are shared across these CNNs, the deep fusion degrades to shallow fusion.
Our approach ::: Contextual Recurrent Unit ::: Deep-Enhanced Fusion
In shallow fusion and deep fusion, we used the convolutional operation to summarize the context. However, one drawback of them is that the original word embedding might be blurred by blending the words around it, i.e., applying the convolutional operation on its context.
For better modeling the original word and its context, we enhanced the deep fusion model with original word embedding information, with an intuition of “enriching word representation with contextual information while preserving its basic meaning”. Figure FIGREF17 illustrates our motivations.
Formally, Equations 9 to 11 can be further rewritten as
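A plausible reconstruction, consistent with the description below, is:
$z_t = \sigma (W_z (\phi _z(\widetilde{e_t}) + e_t) + U_z h_{t-1})$
$r_t = \sigma (W_r (\phi _r(\widetilde{e_t}) + e_t) + U_r h_{t-1})$
$\widetilde{h}_t = \tanh (W (\phi (\widetilde{e_t}) + e_t) + U (r_t \odot h_{t-1}))$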
where we add the original word embedding $e_t$ after the CNN operation, to “enhance” the original word information while not losing the contextual information that has been learned by the CNNs.
Applications
The proposed CRU model is a general neural recurrent unit, so we could apply it to various NLP tasks. As we wonder whether the CRU model could give improvements in both sentence-level modeling and document-level modeling tasks, in this paper, we applied the CRU model to two NLP tasks: sentiment classification and cloze-style reading comprehension. In the sentiment classification task, we build a simple neural model and applied our CRU. In the cloze-style reading comprehension task, we first present some modifications to a recent reading comprehension model, called AoA Reader BIBREF10, and then replace the GRU part by our CRU model to see if our model could give substantial improvements over strong baselines.
Applications ::: Sentiment Classification
In the sentiment classification task, we aim to classify movie reviews, where one movie review will be classified into the positive/negative or subjective/objective category. A general neural network architecture for this task is depicted in Figure FIGREF20.
First, the movie review is transformed into word embeddings. Then a sequence modeling module is applied, in which we can adopt an LSTM, a GRU, or our CRU, to capture the inner relations of the text. In this paper, we adopt bidirectional recurrent units for modeling sentences, and the final hidden outputs are concatenated. After that, a fully connected layer is added, and finally the binary decision is made through a single $sigmoid$ unit.
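A minimal sketch of this architecture follows, assuming a bidirectional GRU as the sequence-modeling module (a CRU would occupy the same slot); the embedding size, hidden size, and dropout rate are illustrative assumptions, while the 1024-dimensional fully connected layer matches the experimental setup reported later.

```python
import torch
import torch.nn as nn

class SentimentClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=200, hidden=256, fc_dim=1024, dropout=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.drop = nn.Dropout(dropout)                       # dropout on embedding / FC outputs
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, fc_dim)               # concatenated final hidden states
        self.out = nn.Linear(fc_dim, 1)                       # single sigmoid output unit

    def forward(self, tokens):                                # tokens: (batch, seq_len)
        emb = self.drop(self.embed(tokens))
        _, h_n = self.rnn(emb)                                # h_n: (2, batch, hidden)
        h = torch.cat([h_n[0], h_n[1]], dim=-1)               # forward and backward final states
        return torch.sigmoid(self.out(self.drop(torch.relu(self.fc(h)))))

model = SentimentClassifier(vocab_size=20000)
probs = model(torch.randint(0, 20000, (4, 30)))               # (4, 1) positive-class probabilities
```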
As shown, we employed a straightforward neural architecture to this task, as we purely want to compare our CRU model against other sequential models. The detailed experimental result of sentiment classification will be given in the next section.
Applications ::: Reading Comprehension
Besides the sentiment classification task, we also tried our CRU model on cloze-style reading comprehension, which is a much more complicated task. In this paper, we strengthen the recent AoA Reader BIBREF10 and apply our CRU model to see if we can obtain substantial improvements when the baseline is strengthened.
Applications ::: Reading Comprehension ::: Task Description
The cloze-style reading comprehension is a fundamental task that explores relations between the document and the query. Formally, a general cloze-style query can be illustrated as a triple $\langle {\mathcal {D}}, {\mathcal {Q}}, {\mathcal {A}} \rangle $, where $\mathcal {D}$ is the document, $\mathcal {Q}$ is the query, and $\mathcal {A}$ is the answer. Note that the answer is a single word in the document, which requires us to exploit the relationship between the document and the query.
Applications ::: Reading Comprehension ::: Modified AoA Reader
In this section, we briefly introduce the original AoA Reader BIBREF10, and illustrate our modifications. When a cloze-style training triple $\langle \mathcal {D}, \mathcal {Q}, \mathcal {A} \rangle $ is given, the Modified AoA Reader will be constructed in the following steps. First, the document and query will be transformed into continuous representations with the embedding layer and recurrent layer. The recurrent layer can be the simple RNN, GRU, LSTM, or our CRU model.
To further strengthen the representation power, we introduce a simple modification in the embedding layer, for which we found strong empirical gains. The main idea is to utilize additional sparse features of the word and concatenate these features to the word embeddings to enrich the word representations. Such additional features have been shown to be effective in various models BIBREF7, BIBREF17, BIBREF11. In this paper, we adopt two additional features in the document word embeddings (no features are applied to the query side).
$\bullet $ Document word frequency: Calculate each document word frequency. This helps the model to pay more attention to the important (more mentioned) part of the document.
$\bullet $ Count of query word: Count the number of times each document word appears in the query. For example, if a document word appears three times in the query, then the feature value will be 3. We empirically find that, instead of using binary features (appear=1, otherwise=0) BIBREF17, indicating the count of the word provides more information, suggesting that the more often a word occurs in the query, the less likely it is to be the answer. We replace Equation 16 with the following formulation (the query side is not changed),
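A sketch of the replaced formulation (the exact form is an assumption; $[\,\cdot \,;\, \cdot \,]$ denotes concatenation) is:
$e(x) = [\, W_e \, x \; ; \; freq(x) \; ; \; CoQ(x) \,]$ for document words, while the query side keeps $e(x) = W_e \, x$.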
where $freq(x)$ and $CoQ(x)$ are the features introduced above.
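A small sketch of how these two document-side features could be computed is shown below (a hypothetical helper, not the authors' code; whether the frequency is normalized is an assumption):

```python
from collections import Counter

def document_features(document, query):
    """Per-token document word frequency and count-of-query-word features."""
    doc_counts = Counter(document)
    query_counts = Counter(query)
    freq = [doc_counts[w] / len(document) for w in document]   # document word frequency
    coq = [query_counts[w] for w in document]                  # count of this word in the query
    return freq, coq

doc = "the cat sat on the mat because the mat was warm".split()
qry = "where did the cat sit".split()
freq, coq = document_features(doc, qry)   # these vectors are concatenated to the document embeddings
```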
Other parts of the model remain the same as the original AoA Reader. For simplicity, we will omit this part, and the detailed illustrations can be found in BIBREF10.
Experiments: Sentiment Classification ::: Experimental Setups
In the sentiment classification task, we tried our model on the following public datasets.
[leftmargin=*]
MR Movie reviews with one sentence each. Each review is classified into positive or negative BIBREF18.
IMDB Movie reviews from IMDB website, where each movie review is labeled with binary classes, either positive or negative BIBREF19. Note that each movie review may contain several sentences.
SUBJ$^1$ Movie review labeled with subjective or objective BIBREF20.
The statistics and hyper-parameter settings of these datasets are listed in Table TABREF33.
As these datasets are quite small and easy to overfit, we employ $l_2$-regularization of 0.0001 on the embedding layer for all datasets. We also apply dropout BIBREF21 to the output of the embedding layer and the fully connected layer. The fully connected layer has a dimension of 1024. For MR and SUBJ, the embedding layer is initialized with 200-dimensional GloVe embeddings (trained on 840B tokens) BIBREF22 and fine-tuned during the training process. For IMDB, the vocabulary is truncated in descending order of word frequency. We adopt a batched training strategy of 32 samples with the ADAM optimizer BIBREF23, and clip gradients to 5 BIBREF24. Unless indicated otherwise, the convolutional filter length is set to 3, and ReLU is used as the non-linear function of the CNN in all experiments. We use 10-fold cross-validation (CV) on the datasets that have no train/valid/test division.
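A sketch of how this optimization setup might be wired up in PyTorch follows; the learning rate, vocabulary size, and encoder size are assumptions, while the $l_2$ penalty of 0.0001 on the embedding layer, the batch size of 32, and gradient clipping at 5 are taken from the setup above:

```python
import torch
import torch.nn as nn

embed = nn.Embedding(20000, 200)
encoder = nn.GRU(200, 256, batch_first=True, bidirectional=True)
head = nn.Linear(512, 1)

optimizer = torch.optim.Adam(
    [
        {"params": embed.parameters(), "weight_decay": 1e-4},  # l2 of 0.0001 on the embedding layer only
        {"params": list(encoder.parameters()) + list(head.parameters())},
    ],
    lr=1e-3,  # assumed learning rate
)

tokens = torch.randint(0, 20000, (32, 40))                     # a batch of 32 samples
labels = torch.randint(0, 2, (32,)).float()
_, h_n = encoder(nn.functional.dropout(embed(tokens), p=0.5))  # dropout on the embedding output
logits = head(torch.cat([h_n[0], h_n[1]], dim=-1)).squeeze(-1)
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
loss.backward()
nn.utils.clip_grad_norm_(
    list(embed.parameters()) + list(encoder.parameters()) + list(head.parameters()), 5.0
)                                                              # clip gradients to 5
optimizer.step()                                               # one ADAM step
```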
Experiments: Sentiment Classification ::: Results
The experimental results are shown in Table TABREF35. As mentioned before, all RNNs in these models are bi-directional, because we wonder whether our bi-CRU can still give substantial improvements over a bi-GRU that captures both history and future information. As we can see, all variants of our CRU model give substantial improvements over the traditional GRU model, with maximum gains of 2.7%, 1.0%, and 1.9% observed on the three datasets, respectively. We also found that, though we adopt a straightforward classification model, our CRU model outperforms the state-of-the-art systems by 0.6%, 0.7%, and 0.8%, respectively, which demonstrates its effectiveness. By employing a more sophisticated architecture or introducing task-specific features, we think there is still much room for further improvement, which is beyond the scope of this paper.
When comparing the three variants of the CRU model, as we expected, the CRU with deep-enhanced fusion performs best among them. This demonstrates that incorporating contextual representations together with the original word embeddings enhances the representation power. We also noticed that a larger window size of the convolutional filter, i.e., 5 in this experiment, does not improve the performance. We plot the trend of MR test set accuracy with increasing convolutional filter length in Figure FIGREF36.
As we can see, using a smaller convolutional filter does not provide much contextual information, thus giving a lower accuracy. On the contrary, larger filters generally outperform smaller ones, but not always. One possible reason is that when the filter becomes larger, the amortized contextual information is less than with a smaller filter, making it harder for the model to learn the contextual information. However, we think the proper size of the convolutional filter may vary from task to task. Some tasks that require long-span contextual information may benefit from a larger filter.
We also compared our CRU model with related works that combine CNN and RNN BIBREF3, BIBREF4, BIBREF5. From the results, we can see that our CRU model significantly outperforms previous works, which demonstrates that employing deep fusion and enhancing the contextual representations with the original embeddings can substantially improve the power of the word representations.
From another perspective, we plot the trend of IMDB test set accuracy during the training process, as depicted in Figure FIGREF37. As we can see, after six epochs over the training data, all variants of the CRU model show faster convergence and smaller performance fluctuations than the traditional GRU model, which demonstrates that the proposed CRU model has better training stability.
Experiments: Reading Comprehension ::: Experimental Setups
We also tested our CRU model in the cloze-style reading comprehension task. We carried out experiments on the public datasets: CBT NE/CN BIBREF25. The CRU model used in these experiments is the deep-enhanced type with the convolutional filter length of 3. In the re-ranking step, we also utilized three features: Global LM, Local LM, Word-class LM, as proposed by BIBREF10, and all LMs are 8-gram trained by SRILM toolkit BIBREF27. For other settings, such as hyperparameters, initializations, etc., we closely follow the experimental setups as BIBREF10 to make the experiments more comparable.
Experiments: Reading Comprehension ::: Results
The overall experimental results are given in Table TABREF38. As we can see, our proposed models substantially outperform various state-of-the-art systems by a large margin.
[leftmargin=*]
Overall, our final model (M-AoA Reader + CRU + Re-ranking) could give significant improvements over the previous state-of-the-art systems by 2.1% and 1.4% in test sets, while re-ranking and ensemble bring further improvements.
When comparing the M-AoA Reader to the original AoA Reader, improvements of 1.8% and 0.4% can be observed, suggesting that incorporating additional features into the embeddings enriches the power of the word representations. Incorporating more additional features into the word embeddings could give a further boost in the results, but we leave this to future work.
Replacing the GRU with our CRU significantly improves the performance, with gains of 1.6% and 1.5% over the M-AoA Reader. This demonstrates that incorporating contextual information when modeling the sentence enriches the representations. Also, when modeling an unknown word, beyond its randomly initialized word embedding, the contextual information can give a possible guess of the unknown word, making the text more readable to the neural networks.
The re-ranking strategy is an effective approach for this task. We observed that the gains in the common noun category are significantly greater than those in the named entity category. One possible reason is that the language model is much more beneficial for CN than for NE, because it is much more likely to encounter a new named entity that is not covered in the training data than a new common noun.
Qualitative Analysis
In this section, we give a qualitative analysis of our proposed CRU model on the sentiment classification task. We focus on two categories of movie reviews for which it is quite hard for the model to judge the correct sentiment. The first category contains negation terms, such as “not”. The second category contains sentiment transitions, such as “clever but not compelling”. We manually select 50 samples of each category from the MR dataset, forming a total of 100 samples, to see whether our CRU model is superior in handling these movie reviews. The results are shown in Table TABREF45. As we can see, our CRU model is better on both categories of movie review classification, demonstrating its effectiveness.
Among these samples, we select an intuitive example in which the CRU successfully captures the true meaning of the sentence and gives the correct sentiment label. We segment the full movie review into three sentences, as shown in Table TABREF46.
For the first and second sentences, both models give the correct sentiment prediction. When the third sentence is introduced, the GRU baseline model fails to recognize this review as positive because there are many negation terms in the sentence. However, our CRU model captures the local context while recurrently modeling the sentence, and phrases such as “not making fun” and “not laughing at” are correctly treated as positive sentiment, which corrects the sentiment category of the full review. This suggests that our model is superior at modeling local context and produces a more accurate meaning.
Conclusion
In this paper, we proposed an effective recurrent model for modeling sequences, called Contextual Recurrent Units (CRU). We inject the CNN into GRU, which aims to better model the local context information via CNN before recurrently modeling the sequence. We have tested our CRU model on the cloze-style reading comprehension task and sentiment classification task. Experimental results show that our model could give substantial improvements over various state-of-the-art systems and set up new records on the respective public datasets. In the future, we plan to investigate convolutional filters that have dynamic lengths to adaptively capture the possible spans of its context. | word embeddings to generate a new feature, i.e., summarizing a local context |
87357448ce4cae3c59d4570a19c7a9df4c086bd8 | 87357448ce4cae3c59d4570a19c7a9df4c086bd8_0 | Q: How is CNN injected into recurrent units?
Text: Introduction
Neural network based approaches have become popular frameworks in many machine learning research fields, showing their advantages over traditional methods. In NLP tasks, two types of neural networks are widely used: the Recurrent Neural Network (RNN) and the Convolutional Neural Network (CNN).
RNNs are powerful models in various NLP tasks, such as machine translation BIBREF0, sentiment classification BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, reading comprehension BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, etc. The recurrent neural networks can flexibly model different lengths of sequences into a fixed representation. There are two main implementations of RNN: Long Short-Term Memory (LSTM) BIBREF12 and Gated Recurrent Unit (GRU) BIBREF0, which solve the gradient vanishing problems in vanilla RNNs.
Compared to RNN, the CNN model also shows competitive performances in some tasks, such as text classification BIBREF13, etc. However, different from RNN, CNN sets a pre-defined convolutional kernel to “summarize” a fixed window of adjacent elements into blended representations, showing its ability of modeling local context.
As both global and local information is important in most of NLP tasks BIBREF14, in this paper, we propose a novel recurrent unit, called Contextual Recurrent Unit (CRU). The proposed CRU model adopts advantages of RNN and CNN, where CNN is good at modeling local context, and RNN is superior in capturing long-term dependencies. We propose three variants of our CRU model: shallow fusion, deep fusion and deep-enhanced fusion.
To verify the effectiveness of our CRU model, we apply it to two different NLP tasks: sentiment classification and reading comprehension, where the former is sentence-level modeling and the latter is document-level modeling. In the sentiment classification task, we build a standard neural network and replace the recurrent unit with our CRU model. To further demonstrate the effectiveness of our model, we also test our CRU on reading comprehension tasks with a strengthened baseline system originating from the Attention-over-Attention Reader (AoA Reader) BIBREF10. Experimental results on public datasets show that our CRU model substantially outperforms various systems by a large margin and sets new state-of-the-art performances on the related datasets. The main contributions of our work are listed as follows.
[leftmargin=*]
We propose a novel neural recurrent unit called the Contextual Recurrent Unit (CRU), which effectively incorporates the advantages of CNN and RNN. Different from previous works, our CRU model retains the excellent flexibility of the GRU while providing better performance.
The CRU model is applied to both sentence-level and document-level modeling tasks and gives state-of-the-art performances.
The CRU also gives substantial improvements on the cloze-style reading comprehension task when the baseline system is strengthened by incorporating additional features, which enrich the representations of unknown words and make the texts more readable to the machine.
Related Works
The gated recurrent unit (GRU) was proposed in the scenario of neural machine translation BIBREF0. It has been shown that the GRU has performance comparable to the LSTM in some tasks. Another advantage of the GRU is that it has a simpler neural architecture than the LSTM, allowing much more efficient computation.
However, convolutional neural network (CNN) is not as popular as RNNs in NLP tasks, as the texts are formed temporally. But in some studies, CNN shows competitive performance to the RNN models, such as text classification BIBREF13.
Various efforts have been made to combine CNN and RNN. BIBREF3 proposed an architecture that combines a CNN and a GRU model with word embeddings pre-trained by word2vec. BIBREF5 proposed to combine an asymmetric convolutional neural network with a bidirectional LSTM network. BIBREF4 presented the Dependency Sensitive CNN, which hierarchically constructs text by using LSTMs and subsequently extracting features with convolution operations. BIBREF15 proposed to make use of dependency relation information in the shortest dependency path (SDP) by combining a CNN and two-channel LSTM units. BIBREF16 built a neural network for dialogue topic tracking where the CNN is used to account for semantics at the individual utterance level and the RNN models conversational contexts along multiple turns of history.
The difference between our CRU model and previous works can be concluded as follows.
[leftmargin=*]
Our CRU model could adaptively control the amount of information that flows into different gates, which was not studied in previous works.
Also, the CRU does not introduce a pooling operation, as opposed to other works such as CNN-GRU BIBREF3. Our motivation is to provide the same flexibility as the original GRU, while the pooling operation breaks this property (the output length is changed) and makes exact word-level attention over the output impossible. In our CRU model, however, the output length is the same as the input's, so the CRU can easily be applied to various tasks where the GRU is used.
We also observed that using only a CNN to summarize contextual information is not strong enough. So we incorporate the original word embeddings to form a “word + context” representation for enhancement.
Our approach
In this section, we will give a detailed introduction to our CRU model. Firstly, we will give a brief introduction to GRU BIBREF0 as preliminaries, and then three variants of our CRU model will be illustrated.
Our approach ::: Gated Recurrent Unit
The Gated Recurrent Unit (GRU) is a type of recurrent unit that models sequential data BIBREF0; it is similar to the LSTM but much simpler and more computationally efficient. We briefly introduce the formulation of the GRU. Given a sequence $x = \lbrace x_1, x_2, ..., x_n\rbrace $, the GRU processes the data as follows. For simplicity, the bias term is omitted in the following equations.
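The display equations are reproduced here in the standard GRU formulation (bias terms omitted; the update-gate convention below is one common choice):
$z_t = \sigma (W_z x_t + U_z h_{t-1})$
$r_t = \sigma (W_r x_t + U_r h_{t-1})$
$\widetilde{h}_t = \tanh (W x_t + U (r_t \odot h_{t-1}))$
$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \widetilde{h}_t$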
where $z_t$ is the update gate, $r_t$ is the reset gate, and the non-linear function $\sigma $ is often chosen to be the sigmoid function. In many NLP tasks, we often use a bi-directional GRU, which takes both forward and backward information into account.
Our approach ::: Contextual Recurrent Unit
Modeling only word-level representations may have drawbacks in representing a word that has different meanings when the context varies. Here is an example that shows this problem.
There are many fan mails in the mailbox.
There are many fan makers in the factory.
As we can see that, though two sentences share the same beginning before the word fan, the meanings of the word fan itself are totally different when we meet the following word mails and makers. The first fan means “a person that has strong interests in a person or thing", and the second one means “a machine with rotating blades for ventilation". However, the embedding of word fan does not discriminate according to the context. Also, as two sentences have the same beginning, when we apply a recurrent operation (such as GRU) till the word fan, the output of GRU does not change, though they have entirely different meanings when we see the following words.
To enrich the word representation with local contextual information and diminishing the word ambiguities, we propose a model as an extension to the GRU, called Contextual Recurrent Unit (CRU). In this model, we take full advantage of the convolutional neural network and recurrent neural network, where the former is good at modeling local information, and the latter is capable of capturing long-term dependencies. Moreover, in the experiment part, we will also show that our bidirectional CRU could also significantly outperform the bidirectional GRU model.
In this paper, we propose three different types of CRU models: shallow fusion, deep fusion and deep-enhanced fusion, from the most fundamental one to the most expressive one. We will describe these models in detail in the following sections.
Our approach ::: Contextual Recurrent Unit ::: Shallow Fusion
The simplest variant is to directly apply a CNN layer after the embedding layer to obtain blended contextual representations, followed by a GRU layer. We call this model shallow fusion, because the CNN and RNN are applied sequentially without changing the inner architecture of either.
Formally, when given a sequential data $x = \lbrace x_1, x_2, ..., x_n\rbrace $, a shallow fusion of CRU can be illustrated as follows.
We first transform word $x_t$ into word embeddings through an embedding matrix $W_e$. Then a convolutional operation $\phi $ is applied to the context of $e_t$, denoted as $\widetilde{e_t}$, to obtain contextual representations. Finally, the contextual representation $c_t$ is fed into GRU units.
Following BIBREF13, we apply an embedding-wise convolution operation, which is commonly used in natural language processing tasks. Let $e_{i:j} \in \mathbb {R}^{(j-i+1) \times d}$ denote the concatenation of $j-i+1$ consecutive $d$-dimensional word embeddings.
The embedding-wise convolution applies a convolution filter $\mathbf {w} \in \mathbb {R}^{k \times d}$ to a window of $k$ word embeddings to generate a new feature, i.e., summarizing a local context of $k$ words. This can be formulated as
where $f$ is a non-linear function and $b$ is the bias.
By applying the convolutional filter to all possible windows in the sentence, a feature map $c$ is generated. In this paper, we apply a same-length convolution (the length of the sentence does not change), i.e., $c \in \mathbb {R}^{n \times 1}$. We then apply $d$ filters with the same window size to obtain multiple feature maps, so the final output of the CNN has the shape $C \in \mathbb {R}^{n \times d}$, which is exactly the same size as the $n$ word embeddings and enables us to do exact word-level attention in various tasks.
Our approach ::: Contextual Recurrent Unit ::: Deep Fusion
The contextual information that flows into the update gate and reset gate of GRU is identical in shallow fusion. In order to let the model adaptively control the amount of information that flows into these gates, we can embed CNN into GRU in a deep manner. We can rewrite the Equation 1 to 3 of GRU as follows.
where $\phi _z, \phi _r, \phi $ are three different CNN layers, i.e., the weights are not shared. When the weights are shared across these CNNs, the deep fusion degrades to shallow fusion.
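To make the deep fusion concrete, here is a minimal PyTorch sketch of such a layer, assuming the three unshared convolutions feed the update gate, the reset gate, and the candidate state respectively (an illustration, not the authors' implementation):

```python
import torch
import torch.nn as nn

class DeepFusionCRU(nn.Module):
    """Sketch of a deep-fusion CRU layer: three unshared same-length convolutions
    provide the contextual inputs of the update gate, reset gate, and candidate state."""
    def __init__(self, d, hidden, k=3):
        super().__init__()
        self.conv_z = nn.Conv1d(d, d, k, padding=k // 2)
        self.conv_r = nn.Conv1d(d, d, k, padding=k // 2)
        self.conv_h = nn.Conv1d(d, d, k, padding=k // 2)
        self.W_z, self.U_z = nn.Linear(d, hidden), nn.Linear(hidden, hidden)
        self.W_r, self.U_r = nn.Linear(d, hidden), nn.Linear(hidden, hidden)
        self.W_h, self.U_h = nn.Linear(d, hidden), nn.Linear(hidden, hidden)
        self.hidden = hidden

    def forward(self, emb):                          # emb: (batch, n, d)
        ctx = emb.transpose(1, 2)
        cz = self.conv_z(ctx).transpose(1, 2)        # unshared contextual summaries, (batch, n, d)
        cr = self.conv_r(ctx).transpose(1, 2)
        ch = self.conv_h(ctx).transpose(1, 2)
        h = emb.new_zeros(emb.size(0), self.hidden)
        outputs = []
        for t in range(emb.size(1)):                 # recurrence over time steps
            z = torch.sigmoid(self.W_z(cz[:, t]) + self.U_z(h))
            r = torch.sigmoid(self.W_r(cr[:, t]) + self.U_r(h))
            h_tilde = torch.tanh(self.W_h(ch[:, t]) + self.U_h(r * h))
            h = (1 - z) * h + z * h_tilde
            outputs.append(h)
        return torch.stack(outputs, dim=1), h        # (batch, n, hidden), final state

layer = DeepFusionCRU(d=128, hidden=256)
seq_states, final_state = layer(torch.randn(2, 15, 128))
```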
Our approach ::: Contextual Recurrent Unit ::: Deep-Enhanced Fusion
In shallow fusion and deep fusion, we used the convolutional operation to summarize the context. However, one drawback of them is that the original word embedding might be blurred by blending the words around it, i.e., applying the convolutional operation on its context.
For better modeling the original word and its context, we enhanced the deep fusion model with original word embedding information, with an intuition of “enriching word representation with contextual information while preserving its basic meaning”. Figure FIGREF17 illustrates our motivations.
Formally, the Equation 9 to 11 can be further rewritten into
where we add original word embedding $e_t$ after the CNN operation, to “enhance” the original word information while not losing the contextual information that has learned from CNNs.
Applications
The proposed CRU model is a general neural recurrent unit, so we could apply it to various NLP tasks. As we wonder whether the CRU model could give improvements in both sentence-level modeling and document-level modeling tasks, in this paper, we applied the CRU model to two NLP tasks: sentiment classification and cloze-style reading comprehension. In the sentiment classification task, we build a simple neural model and applied our CRU. In the cloze-style reading comprehension task, we first present some modifications to a recent reading comprehension model, called AoA Reader BIBREF10, and then replace the GRU part by our CRU model to see if our model could give substantial improvements over strong baselines.
Applications ::: Sentiment Classification
In the sentiment classification task, we aim to classify movie reviews, where one movie review will be classified into the positive/negative or subjective/objective category. A general neural network architecture for this task is depicted in Figure FIGREF20.
First, the movie review is transformed into word embeddings. And then, a sequence modeling module is applied, in which we can adopt LSTM, GRU, or our CRU, to capture the inner relations of the text. In this paper, we adopt bidirectional recurrent units for modeling sentences, and then the final hidden outputs are concatenated. After that, a fully connected layer will be added after sequence modeling. Finally, the binary decision is made through a single $sigmoid$ unit.
As shown, we employed a straightforward neural architecture to this task, as we purely want to compare our CRU model against other sequential models. The detailed experimental result of sentiment classification will be given in the next section.
Applications ::: Reading Comprehension
Besides the sentiment classification task, we also tried our CRU model on cloze-style reading comprehension, which is a much more complicated task. In this paper, we strengthen the recent AoA Reader BIBREF10 and apply our CRU model to see if we can obtain substantial improvements when the baseline is strengthened.
Applications ::: Reading Comprehension ::: Task Description
The cloze-style reading comprehension is a fundamental task that explores relations between the document and the query. Formally, a general cloze-style query can be illustrated as a triple $\langle {\mathcal {D}}, {\mathcal {Q}}, {\mathcal {A}} \rangle $, where $\mathcal {D}$ is the document, $\mathcal {Q}$ is the query, and $\mathcal {A}$ is the answer. Note that the answer is a single word in the document, which requires us to exploit the relationship between the document and the query.
Applications ::: Reading Comprehension ::: Modified AoA Reader
In this section, we briefly introduce the original AoA Reader BIBREF10, and illustrate our modifications. When a cloze-style training triple $\langle \mathcal {D}, \mathcal {Q}, \mathcal {A} \rangle $ is given, the Modified AoA Reader will be constructed in the following steps. First, the document and query will be transformed into continuous representations with the embedding layer and recurrent layer. The recurrent layer can be the simple RNN, GRU, LSTM, or our CRU model.
To further strengthen the representation power, we show a simple modification in the embedding layer, where we found strong empirical results in performance. The main idea is to utilize additional sparse features of the word and add (concatenate) these features to the word embeddings to enrich the word representations. The additional features have shown effective in various models BIBREF7, BIBREF17, BIBREF11. In this paper, we adopt two additional features in document word embeddings (no features applied to the query side).
$\bullet $ Document word frequency: Calculate each document word frequency. This helps the model to pay more attention to the important (more mentioned) part of the document.
$\bullet $ Count of query word: Count the number of each document word appeared in the query. For example, if a document word appears three times in the query, then the feature value will be 3. We empirically find that instead of using binary features (appear=1, otherwise=0) BIBREF17, indicating the count of the word provides more information, suggesting that the more a word occurs in the query, the less possible the answer it will be. We replace the Equation 16 with the following formulation (query side is not changed),
where $freq(x)$ and $CoQ(x)$ are the features that introduced above.
Other parts of the model remain the same as the original AoA Reader. For simplicity, we will omit this part, and the detailed illustrations can be found in BIBREF10.
Experiments: Sentiment Classification ::: Experimental Setups
In the sentiment classification task, we tried our model on the following public datasets.
[leftmargin=*]
MR Movie reviews with one sentence each. Each review is classified into positive or negative BIBREF18.
IMDB Movie reviews from IMDB website, where each movie review is labeled with binary classes, either positive or negative BIBREF19. Note that each movie review may contain several sentences.
SUBJ$^1$ Movie review labeled with subjective or objective BIBREF20.
The statistics and hyper-parameter settings of these datasets are listed in Table TABREF33.
As these datasets are quite small and overfit easily, we employed $l_2$-regularization of 0.0001 to the embedding layer in all datasets. Also, we applied dropout BIBREF21 to the output of the embedding layer and fully connected layer. The fully connected layer has a dimension of 1024. In the MR and SUBJ, the embedding layer is initialized with 200-dimensional GloVe embeddings (trained on 840B token) BIBREF22 and fine-tuned during the training process. In the IMDB condition, the vocabulary is truncated by descending word frequency order. We adopt batched training strategy of 32 samples with ADAM optimizer BIBREF23, and clipped gradient to 5 BIBREF24. Unless indicated, the convolutional filter length is set to 3, and ReLU for the non-linear function of CNN in all experiments. We use 10-fold cross-validation (CV) in the dataset that has no train/valid/test division.
Experiments: Sentiment Classification ::: Results
The experimental results are shown in Table TABREF35. As we mentioned before, all RNNs in these models are bi-directional, because we wonder if our bi-CRU could still give substantial improvements over bi-GRU which could capture both history and future information. As we can see that, all variants of our CRU model could give substantial improvements over the traditional GRU model, where a maximum gain of 2.7%, 1.0%, and 1.9% can be observed in three datasets, respectively. We also found that though we adopt a straightforward classification model, our CRU model could outperform the state-of-the-art systems by 0.6%, 0.7%, and 0.8% gains respectively, which demonstrate its effectiveness. By employing more sophisticated architecture or introducing task-specific features, we think there is still much room for further improvements, which is beyond the scope of this paper.
When comparing three variants of the CRU model, as we expected, the CRU with deep-enhanced fusion performs best among them. This demonstrates that by incorporating contextual representations with original word embedding could enhance the representation power. Also, we noticed that when we tried a larger window size of the convolutional filter, i.e., 5 in this experiment, does not give a rise in the performance. We plot the trends of MR test set accuracy with the increasing convolutional filter length, as shown in Figure FIGREF36.
As we can see that, using a smaller convolutional filter does not provide much contextual information, thus giving a lower accuracy. On the contrary, the larger filters generally outperform the lower ones, but not always. One possible reason for this is that when the filter becomes larger, the amortized contextual information is less than a smaller filter, and make it harder for the model to learn the contextual information. However, we think the proper size of the convolutional filter may vary task by task. Some tasks that require long-span contextual information may benefit from a larger filter.
We also compared our CRU model with related works that combine CNN and RNN BIBREF3, BIBREF4, BIBREF5. From the results, we can see that our CRU model significantly outperforms previous works, which demonstrates that by employing deep fusion and enhancing the contextual representations with original embeddings could substantially improve the power of word representations.
On another aspect, we plot the trends of IMDB test set accuracy during the training process, as depicted in Figure FIGREF37. As we can see that, after iterating six epochs of training data, all variants of CRU models show faster convergence speed and smaller performance fluctuation than the traditional GRU model, which demonstrates that the proposed CRU model has better training stability.
Experiments: Reading Comprehension ::: Experimental Setups
We also tested our CRU model in the cloze-style reading comprehension task. We carried out experiments on the public datasets: CBT NE/CN BIBREF25. The CRU model used in these experiments is the deep-enhanced type with the convolutional filter length of 3. In the re-ranking step, we also utilized three features: Global LM, Local LM, Word-class LM, as proposed by BIBREF10, and all LMs are 8-gram trained by SRILM toolkit BIBREF27. For other settings, such as hyperparameters, initializations, etc., we closely follow the experimental setups as BIBREF10 to make the experiments more comparable.
Experiments: Reading Comprehension ::: Results
The overall experimental results are given in Table TABREF38. As we can see that our proposed models can substantially outperform various state-of-the-art systems by a large margin.
[leftmargin=*]
Overall, our final model (M-AoA Reader + CRU + Re-ranking) could give significant improvements over the previous state-of-the-art systems by 2.1% and 1.4% in test sets, while re-ranking and ensemble bring further improvements.
When comparing M-AoA Reader to the original AoA Reader, 1.8% and 0.4% improvements can be observed, suggesting that by incorporating additional features into embedding can enrich the power of word representation. Incorporating more additional features in the word embeddings would have another boost in the results, but we leave this in future work.
Replacing GRU with our CRU could significantly improve the performance, where 1.6% and 1.5% gains can be obtained when compared to M-AoA Reader. This demonstrates that incorporating contextual information when modeling the sentence could enrich the representations. Also, when modeling an unknown word, except for its randomly initialized word embedding, the contextual information could give a possible guess of the unknown word, making the text more readable to the neural networks.
The re-ranking strategy is an effective approach in this task. We observed that the gains in the common noun category are significantly greater than the named entity. One possible reason is that the language model is much beneficial to CN than NE, because it is much more likely to meet a new named entity that is not covered in the training data than the common noun.
Qualitative Analysis
In this section, we will give a qualitative analysis on our proposed CRU model in the sentiment classification task. We focus on two categories of the movie reviews, which is quite harder for the model to judge the correct sentiment. The first one is the movie review that contains negation terms, such as “not”. The second type is the one contains sentiment transition, such as “clever but not compelling”. We manually select 50 samples of each category in the MR dataset, forming a total of 100 samples to see if our CRU model is superior in handling these movie reviews. The results are shown in Table TABREF45. As we can see that, our CRU model is better at both categories of movie review classification, demonstrating its effectiveness.
Among these samples, we select an intuitive example that the CRU successfully captures the true meaning of the sentence and gives the correct sentiment label. We segment a full movie review into three sentences, which is shown in Table TABREF46.
Regarding the first and second sentence, both models give correct sentiment prediction. While introducing the third sentence, the GRU baseline model failed to recognize this review as a positive sentiment because there are many negation terms in the sentence. However, our CRU model could capture the local context during the recurrent modeling the sentence, and the phrases such as “not making fun” and “not laughing at” could be correctly noted as positive sentiment which will correct the sentiment category of the full review, suggesting that our model is superior at modeling local context and gives much accurate meaning.
Conclusion
In this paper, we proposed an effective recurrent model for modeling sequences, called Contextual Recurrent Units (CRU). We inject the CNN into GRU, which aims to better model the local context information via CNN before recurrently modeling the sequence. We have tested our CRU model on the cloze-style reading comprehension task and sentiment classification task. Experimental results show that our model could give substantial improvements over various state-of-the-art systems and set up new records on the respective public datasets. In the future, we plan to investigate convolutional filters that have dynamic lengths to adaptively capture the possible spans of its context. | The most simple one is to directly apply a CNN layer after the embedding layer to obtain blended contextual representations. Then a GRU layer is applied afterward. |
1ccc4f63268aa7841cc6fd23535c9cbe85791007 | 1ccc4f63268aa7841cc6fd23535c9cbe85791007_0 | Q: Are there some results better than state of the art on these tasks?
Text: Introduction
Neural network based approaches have become popular frameworks in many machine learning research fields, showing their advantages over traditional methods. In NLP tasks, two types of neural networks are widely used: the Recurrent Neural Network (RNN) and the Convolutional Neural Network (CNN).
RNNs are powerful models in various NLP tasks, such as machine translation BIBREF0, sentiment classification BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, reading comprehension BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, etc. The recurrent neural networks can flexibly model different lengths of sequences into a fixed representation. There are two main implementations of RNN: Long Short-Term Memory (LSTM) BIBREF12 and Gated Recurrent Unit (GRU) BIBREF0, which solve the gradient vanishing problems in vanilla RNNs.
Compared to RNN, the CNN model also shows competitive performances in some tasks, such as text classification BIBREF13, etc. However, different from RNN, CNN sets a pre-defined convolutional kernel to “summarize” a fixed window of adjacent elements into blended representations, showing its ability of modeling local context.
As both global and local information is important in most of NLP tasks BIBREF14, in this paper, we propose a novel recurrent unit, called Contextual Recurrent Unit (CRU). The proposed CRU model adopts advantages of RNN and CNN, where CNN is good at modeling local context, and RNN is superior in capturing long-term dependencies. We propose three variants of our CRU model: shallow fusion, deep fusion and deep-enhanced fusion.
To verify the effectiveness of our CRU model, we utilize it into two different NLP tasks: sentiment classification and reading comprehension, where the former is sentence-level modeling, and the latter is document-level modeling. In the sentiment classification task, we build a standard neural network and replace the recurrent unit by our CRU model. To further demonstrate the effectiveness of our model, we also tested our CRU in reading comprehension tasks with a strengthened baseline system originated from Attention-over-Attention Reader (AoA Reader) BIBREF10. Experimental results on public datasets show that our CRU model could substantially outperform various systems by a large margin, and set up new state-of-the-art performances on related datasets. The main contributions of our work are listed as follows.
[leftmargin=*]
We propose a novel neural recurrent unit called Contextual Recurrent Unit (CRU), which effectively incorporate the advantage of CNN and RNN. Different from previous works, our CRU model shows its excellent flexibility as GRU and provides better performance.
The CRU model is applied to both sentence-level and document-level modeling tasks and gives state-of-the-art performances.
The CRU could also give substantial improvements in cloze-style reading comprehension task when the baseline system is strengthened by incorporating additional features which will enrich the representations of unknown words and make the texts more readable to the machine.
Related Works
The gated recurrent unit (GRU) was proposed in the scenario of neural machine translation BIBREF0. It has been shown that the GRU has performance comparable to the LSTM in some tasks. Another advantage of the GRU is that it has a simpler neural architecture than the LSTM, allowing much more efficient computation.
However, convolutional neural network (CNN) is not as popular as RNNs in NLP tasks, as the texts are formed temporally. But in some studies, CNN shows competitive performance to the RNN models, such as text classification BIBREF13.
Various efforts have been made on combining CNN and RNN. BIBREF3 proposed an architecture that combines CNN and GRU model with pre-trained word embeddings by word2vec. BIBREF5 proposed to combine asymmetric convolution neural network with the bidirectional LSTM network. BIBREF4 presented Dependency Sensitive CNN, which hierarchically construct text by using LSTMs and extracting features with convolution operations subsequently. BIBREF15 propose to make use of dependency relations information in the shortest dependency path (SDP) by combining CNN and two-channel LSTM units. BIBREF16 build a neural network for dialogue topic tracking where the CNN used to account for semantics at individual utterance and RNN for modeling conversational contexts along multiple turns in history.
The difference between our CRU model and previous works can be concluded as follows.
[leftmargin=*]
Our CRU model could adaptively control the amount of information that flows into different gates, which was not studied in previous works.
Also, the CRU does not introduce a pooling operation, as opposed to other works, such as CNN-GRU BIBREF3. Our motivation is to provide flexibility as the original GRU, while the pooling operation breaks this law (the output length is changed), and it is unable to do exact word-level attention over the output. However, in our CRU model, the output length is the same as the input's and can be easily applied to various tasks where the GRU used to.
We also observed that by only using CNN to conclude contextual information is not strong enough. So we incorporate the original word embeddings to form a "word + context" representation for enhancement.
Our approach
In this section, we will give a detailed introduction to our CRU model. Firstly, we will give a brief introduction to GRU BIBREF0 as preliminaries, and then three variants of our CRU model will be illustrated.
Our approach ::: Gated Recurrent Unit
Gated Recurrent Unit (GRU) is a type of recurrent unit that models sequential data BIBREF0, which is similar to LSTM but is much simpler and computationally effective than the latter one. We will briefly introduce the formulation of GRU. Given a sequence $x = \lbrace x_1, x_2, ..., x_n\rbrace $, GRU will process the data in the following ways. For simplicity, the bias term is omitted in the following equations.
where $z_t$ is the update gate, $r_t$ is the reset gate, and non-linear function $\sigma $ is often chosen as $sigmoid$ function. In many NLP tasks, we often use a bi-directional GRU, which takes both forward and backward information into account.
Our approach ::: Contextual Recurrent Unit
Modeling only word-level representations may have drawbacks in representing a word that has different meanings when the context varies. Here is an example that shows this problem.
There are many fan mails in the mailbox.
There are many fan makers in the factory.
As we can see that, though two sentences share the same beginning before the word fan, the meanings of the word fan itself are totally different when we meet the following word mails and makers. The first fan means “a person that has strong interests in a person or thing", and the second one means “a machine with rotating blades for ventilation". However, the embedding of word fan does not discriminate according to the context. Also, as two sentences have the same beginning, when we apply a recurrent operation (such as GRU) till the word fan, the output of GRU does not change, though they have entirely different meanings when we see the following words.
To enrich the word representation with local contextual information and diminishing the word ambiguities, we propose a model as an extension to the GRU, called Contextual Recurrent Unit (CRU). In this model, we take full advantage of the convolutional neural network and recurrent neural network, where the former is good at modeling local information, and the latter is capable of capturing long-term dependencies. Moreover, in the experiment part, we will also show that our bidirectional CRU could also significantly outperform the bidirectional GRU model.
In this paper, we propose three different types of CRU models: shallow fusion, deep fusion and deep-enhanced fusion, from the most fundamental one to the most expressive one. We will describe these models in detail in the following sections.
Our approach ::: Contextual Recurrent Unit ::: Shallow Fusion
The simplest variant is to directly apply a CNN layer after the embedding layer to obtain blended contextual representations, followed by a GRU layer. We call this model shallow fusion, because the CNN and RNN are applied sequentially without changing the inner architecture of either.
Formally, when given a sequential data $x = \lbrace x_1, x_2, ..., x_n\rbrace $, a shallow fusion of CRU can be illustrated as follows.
We first transform word $x_t$ into word embeddings through an embedding matrix $W_e$. Then a convolutional operation $\phi $ is applied to the context of $e_t$, denoted as $\widetilde{e_t}$, to obtain contextual representations. Finally, the contextual representation $c_t$ is fed into GRU units.
Following BIBREF13, we apply an embedding-wise convolution operation, which is commonly used in natural language processing tasks. Let $e_{i:j} \in \mathbb {R}^{(j-i+1) \times d}$ denote the concatenation of $j-i+1$ consecutive $d$-dimensional word embeddings.
The embedding-wise convolution applies a convolution filter $\mathbf {w} \in \mathbb {R}^{k \times d}$ to a window of $k$ word embeddings to generate a new feature, i.e., summarizing a local context of $k$ words. This can be formulated as
where $f$ is a non-linear function and $b$ is the bias.
By applying the convolutional filter to all possible windows in the sentence, a feature map $c$ is generated. In this paper, we apply a same-length convolution (the length of the sentence does not change), i.e., $c \in \mathbb {R}^{n \times 1}$. We then apply $d$ filters with the same window size to obtain multiple feature maps, so the final output of the CNN has the shape $C \in \mathbb {R}^{n \times d}$, which is exactly the same size as the $n$ word embeddings and enables us to do exact word-level attention in various tasks.
Our approach ::: Contextual Recurrent Unit ::: Deep Fusion
The contextual information that flows into the update gate and reset gate of GRU is identical in shallow fusion. In order to let the model adaptively control the amount of information that flows into these gates, we can embed CNN into GRU in a deep manner. We can rewrite the Equation 1 to 3 of GRU as follows.
where $\phi _z, \phi _r, \phi $ are three different CNN layers, i.e., the weights are not shared. When the weights are shared across these CNNs, the deep fusion degrades to shallow fusion.
Our approach ::: Contextual Recurrent Unit ::: Deep-Enhanced Fusion
In shallow fusion and deep fusion, we used the convolutional operation to summarize the context. However, one drawback of them is that the original word embedding might be blurred by blending the words around it, i.e., applying the convolutional operation on its context.
For better modeling the original word and its context, we enhanced the deep fusion model with original word embedding information, with an intuition of “enriching word representation with contextual information while preserving its basic meaning”. Figure FIGREF17 illustrates our motivations.
Formally, the Equation 9 to 11 can be further rewritten into
where we add original word embedding $e_t$ after the CNN operation, to “enhance” the original word information while not losing the contextual information that has learned from CNNs.
Applications
The proposed CRU model is a general neural recurrent unit, so we could apply it to various NLP tasks. As we wonder whether the CRU model could give improvements in both sentence-level modeling and document-level modeling tasks, in this paper, we applied the CRU model to two NLP tasks: sentiment classification and cloze-style reading comprehension. In the sentiment classification task, we build a simple neural model and applied our CRU. In the cloze-style reading comprehension task, we first present some modifications to a recent reading comprehension model, called AoA Reader BIBREF10, and then replace the GRU part by our CRU model to see if our model could give substantial improvements over strong baselines.
Applications ::: Sentiment Classification
In the sentiment classification task, we aim to classify movie reviews, where one movie review will be classified into the positive/negative or subjective/objective category. A general neural network architecture for this task is depicted in Figure FIGREF20.
First, the movie review is transformed into word embeddings. And then, a sequence modeling module is applied, in which we can adopt LSTM, GRU, or our CRU, to capture the inner relations of the text. In this paper, we adopt bidirectional recurrent units for modeling sentences, and then the final hidden outputs are concatenated. After that, a fully connected layer will be added after sequence modeling. Finally, the binary decision is made through a single $sigmoid$ unit.
As shown, we employed a straightforward neural architecture to this task, as we purely want to compare our CRU model against other sequential models. The detailed experimental result of sentiment classification will be given in the next section.
Applications ::: Reading Comprehension
Besides the sentiment classification task, we also tried our CRU model on cloze-style reading comprehension, which is a much more complicated task. In this paper, we strengthen the recent AoA Reader BIBREF10 and apply our CRU model to see if we can obtain substantial improvements when the baseline is strengthened.
Applications ::: Reading Comprehension ::: Task Description
The cloze-style reading comprehension is a fundamental task that explores relations between the document and the query. Formally, a general cloze-style query can be illustrated as a triple $\langle {\mathcal {D}}, {\mathcal {Q}}, {\mathcal {A}} \rangle $, where $\mathcal {D}$ is the document, $\mathcal {Q}$ is the query, and $\mathcal {A}$ is the answer. Note that the answer is a single word in the document, which requires us to exploit the relationship between the document and the query.
Applications ::: Reading Comprehension ::: Modified AoA Reader
In this section, we briefly introduce the original AoA Reader BIBREF10, and illustrate our modifications. When a cloze-style training triple $\langle \mathcal {D}, \mathcal {Q}, \mathcal {A} \rangle $ is given, the Modified AoA Reader will be constructed in the following steps. First, the document and query will be transformed into continuous representations with the embedding layer and recurrent layer. The recurrent layer can be the simple RNN, GRU, LSTM, or our CRU model.
To further strengthen the representation power, we show a simple modification in the embedding layer, where we found strong empirical results in performance. The main idea is to utilize additional sparse features of the word and add (concatenate) these features to the word embeddings to enrich the word representations. The additional features have shown effective in various models BIBREF7, BIBREF17, BIBREF11. In this paper, we adopt two additional features in document word embeddings (no features applied to the query side).
$\bullet $ Document word frequency: Calculate each document word frequency. This helps the model to pay more attention to the important (more mentioned) part of the document.
$\bullet $ Count of query word: Count the number of each document word appeared in the query. For example, if a document word appears three times in the query, then the feature value will be 3. We empirically find that instead of using binary features (appear=1, otherwise=0) BIBREF17, indicating the count of the word provides more information, suggesting that the more a word occurs in the query, the less possible the answer it will be. We replace the Equation 16 with the following formulation (query side is not changed),
where $freq(x)$ and $CoQ(x)$ are the features that introduced above.
Other parts of the model remain the same as the original AoA Reader. For simplicity, we will omit this part, and the detailed illustrations can be found in BIBREF10.
Experiments: Sentiment Classification ::: Experimental Setups
In the sentiment classification task, we tried our model on the following public datasets.
[leftmargin=*]
MR Movie reviews with one sentence each. Each review is classified into positive or negative BIBREF18.
IMDB Movie reviews from IMDB website, where each movie review is labeled with binary classes, either positive or negative BIBREF19. Note that each movie review may contain several sentences.
SUBJ$^1$ Movie review labeled with subjective or objective BIBREF20.
The statistics and hyper-parameter settings of these datasets are listed in Table TABREF33.
As these datasets are quite small and overfit easily, we employed $l_2$-regularization of 0.0001 to the embedding layer in all datasets. Also, we applied dropout BIBREF21 to the output of the embedding layer and fully connected layer. The fully connected layer has a dimension of 1024. In the MR and SUBJ, the embedding layer is initialized with 200-dimensional GloVe embeddings (trained on 840B token) BIBREF22 and fine-tuned during the training process. In the IMDB condition, the vocabulary is truncated by descending word frequency order. We adopt batched training strategy of 32 samples with ADAM optimizer BIBREF23, and clipped gradient to 5 BIBREF24. Unless indicated, the convolutional filter length is set to 3, and ReLU for the non-linear function of CNN in all experiments. We use 10-fold cross-validation (CV) in the dataset that has no train/valid/test division.
Experiments: Sentiment Classification ::: Results
The experimental results are shown in Table TABREF35. As mentioned before, all RNNs in these models are bi-directional, because we want to verify whether our bi-CRU still gives substantial improvements over a bi-GRU that captures both history and future information. As we can see, all variants of our CRU model give substantial improvements over the traditional GRU model, with maximum gains of 2.7%, 1.0%, and 1.9% on the three datasets, respectively. We also found that, although we adopt a straightforward classification model, our CRU model outperforms the state-of-the-art systems by 0.6%, 0.7%, and 0.8%, respectively, which demonstrates its effectiveness. By employing a more sophisticated architecture or introducing task-specific features, there is likely still room for further improvements, which is beyond the scope of this paper.
When comparing the three variants of the CRU model, as expected, the CRU with deep-enhanced fusion performs best. This demonstrates that combining contextual representations with the original word embeddings enhances the representation power. We also noticed that a larger convolutional window size, i.e., 5 in this experiment, does not improve performance. We plot the trend of MR test set accuracy with increasing convolutional filter length in Figure FIGREF36.
As we can see, a smaller convolutional filter does not provide much contextual information, and thus gives lower accuracy. Conversely, larger filters generally outperform smaller ones, but not always. One possible reason is that when the filter becomes larger, the contextual information is spread more thinly than with a smaller filter, making it harder for the model to learn. However, we think the proper size of the convolutional filter may vary from task to task: tasks that require long-span contextual information may benefit from a larger filter.
We also compared our CRU model with related works that combine CNN and RNN BIBREF3, BIBREF4, BIBREF5. From the results, we can see that our CRU model significantly outperforms these previous works, which demonstrates that employing deep fusion and enhancing the contextual representations with the original embeddings substantially improves the power of the word representations.
We also plot the IMDB test set accuracy during training, as depicted in Figure FIGREF37. As we can see, after six training epochs, all variants of the CRU model show faster convergence and smaller performance fluctuations than the traditional GRU model, which demonstrates that the proposed CRU model has better training stability.
Experiments: Reading Comprehension ::: Experimental Setups
We also tested our CRU model on the cloze-style reading comprehension task. We carried out experiments on the public CBT NE/CN datasets BIBREF25. The CRU model used in these experiments is the deep-enhanced type with a convolutional filter length of 3. In the re-ranking step, we utilized three features, Global LM, Local LM, and Word-class LM, as proposed by BIBREF10; all LMs are 8-gram models trained with the SRILM toolkit BIBREF27. For other settings, such as hyperparameters and initialization, we closely follow the experimental setup of BIBREF10 to make the experiments comparable.
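The re-ranking step itself is only summarized here, so the sketch below shows a generic log-linear combination of the neural model's answer probability with the three language-model features; the feature weights, the use of log-probabilities, and the way LM scores are obtained (e.g., exported from SRILM) are assumptions for illustration, and the exact recipe is the one described in BIBREF10.

import math

def rerank(candidates, weights=(1.0, 0.5, 0.5, 0.5)):
    """Pick the answer candidate with the best weighted combination of features.

    `candidates` maps each candidate word to a tuple of log-scores:
    (neural model log-probability, global LM, local LM, word-class LM),
    where all LM scores are assumed to be pre-computed log-probabilities.
    """
    def score(feats):
        return sum(w * f for w, f in zip(weights, feats))
    return max(candidates, key=lambda c: score(candidates[c]))

# Hypothetical candidate scores for a single test question.
candidates = {
    "wolf":   (math.log(0.42), -12.3, -9.8, -7.1),
    "forest": (math.log(0.35), -10.9, -9.2, -6.8),
    "basket": (math.log(0.23), -14.0, -11.5, -8.3),
}
print(rerank(candidates))  # the highest-scoring candidate under these weights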
Experiments: Reading Comprehension ::: Results
The overall experimental results are given in Table TABREF38. As we can see, our proposed models substantially outperform various state-of-the-art systems.
Overall, our final model (M-AoA Reader + CRU + Re-ranking) gives significant improvements over the previous state-of-the-art systems, by 2.1% and 1.4% on the test sets, and re-ranking and ensembling bring further improvements.
When comparing the M-AoA Reader to the original AoA Reader, improvements of 1.8% and 0.4% can be observed, suggesting that incorporating additional features into the embeddings enriches the power of the word representations. Incorporating further additional features into the word embeddings may give another boost in results, but we leave this to future work.
Replacing the GRU with our CRU significantly improves performance, with gains of 1.6% and 1.5% over the M-AoA Reader. This demonstrates that incorporating contextual information when modeling the sentence enriches the representations. Moreover, when modeling an unknown word, beyond its randomly initialized word embedding, the contextual information gives a plausible guess at the unknown word, making the text more readable to the neural network.
The re-ranking strategy is effective for this task. We observed that the gains in the common noun category are significantly greater than those in the named entity category. One possible reason is that the language model benefits CN more than NE, because a new named entity that is not covered in the training data is much more likely to appear than a new common noun.
Qualitative Analysis
In this section, we give a qualitative analysis of our proposed CRU model on the sentiment classification task. We focus on two categories of movie reviews for which it is quite hard for the model to judge the correct sentiment. The first category contains negation terms, such as “not”. The second category contains sentiment transitions, such as “clever but not compelling”. We manually select 50 samples of each category from the MR dataset, forming a total of 100 samples, to see whether our CRU model is superior in handling these reviews. The results are shown in Table TABREF45. As we can see, our CRU model performs better on both categories, demonstrating its effectiveness.
Among these samples, we select an intuitive example in which the CRU successfully captures the true meaning of the sentence and gives the correct sentiment label. We segment a full movie review into three sentences, as shown in Table TABREF46.
For the first and second sentences, both models give the correct sentiment prediction. When the third sentence is introduced, the GRU baseline fails to recognize the review as positive because the sentence contains many negation terms. However, our CRU model captures the local context while recurrently modeling the sentence, and phrases such as “not making fun” and “not laughing at” are correctly recognized as positive, which corrects the sentiment category of the full review. This suggests that our model is superior at modeling local context and captures the meaning more accurately.
Conclusion
In this paper, we proposed an effective recurrent model for sequence modeling, called Contextual Recurrent Units (CRU). We inject a CNN into the GRU, aiming to better model local context via the CNN before recurrently modeling the sequence. We tested our CRU model on the cloze-style reading comprehension task and the sentiment classification task. Experimental results show that our model gives substantial improvements over various state-of-the-art systems and sets new records on the respective public datasets. In the future, we plan to investigate convolutional filters with dynamic lengths to adaptively capture the possible spans of the context. | Yes
afe34e553c3c784dbf02add675b15c27638cdd45 | afe34e553c3c784dbf02add675b15c27638cdd45_0 | Q: Do experiment results show consistent significant improvement of new approach over traditional CNN and RNN models? | Yes
3f46d8082a753265ec2a88ae8f1beb6651e281b6 | 3f46d8082a753265ec2a88ae8f1beb6651e281b6_0 | Q: What datasets are used for testing sentiment classification and reading comprehension? | CBT NE/CN, MR Movie reviews, IMDB Movie reviews, SUBJ
63d9b12dc3ff3ceb1aed83ce11371bca8aac4e8f | 63d9b12dc3ff3ceb1aed83ce11371bca8aac4e8f_0 | Q: So we do not use pre-trained embedding in this case?
Text: Introduction
Encoder-decoder models BIBREF0 are effective in tasks such as machine translation ( BIBREF1 , BIBREF1 ; BIBREF2 , BIBREF2 ) and grammatical error correction BIBREF3 . Vocabulary in encoder-decoder models is generally selected from the training corpus in descending order of frequency, and low-frequency words are replaced with an unknown word token <unk>. The so-called out-of-vocabulary (OOV) words are replaced with <unk> to not increase the decoder's complexity and to reduce noise. However, naive frequency-based OOV replacement may lead to loss of information that is necessary for modeling context in the encoder.
This study hypothesizes that vocabulary constructed using unigram frequency includes words that interfere with learning in encoder-decoder models. That is, we presume that vocabulary selection that considers co-occurrence information selects fewer noisy words for learning robust encoders in encoder-decoder models. We apply the hyperlink-induced topic search (HITS) algorithm to extract the co-occurrence relations between words. Intuitively, the removal of words that rarely co-occur with others yields better encoder models than ones that include noisy low-frequency words.
This study examines two tasks, machine translation (MT) and grammatical error correction (GEC) to confirm the effect of decreasing noisy words, with a focus on the vocabulary of the encoder side, because the vocabulary on the decoder side is relatively limited. In a Japanese-to-English MT experiment, our method achieves a BLEU score that is 0.56 points more than that of the frequency-based method. Further, it outperforms the frequency-based method for English GEC, with an $\mathrm {F_{0.5}}$ -measure that is 1.48 points higher.
The main contributions of this study are as follows:
Related Work
There is currently a growing interest in applying neural models to MT ( BIBREF0 , BIBREF0 ; BIBREF1 , BIBREF1 ; BIBREF2 , BIBREF2 ; BIBREF4 , BIBREF4 ) and GEC ( BIBREF3 , BIBREF3 ; BIBREF5 , BIBREF5 ; BIBREF6 , BIBREF6 ); hence, this study focuses on improving the simple attentional encoder-decoder models that are applied to these tasks.
In the investigation of vocabulary restriction in neural models, BIBREF7 applied byte pair encoding to words and created a partial character string set that could express all the words in the training data. They increased the number of words included in the vocabulary to enable the encoder-decoder model to robustly learn contextual information. In contrast, we aim to improve neural models by using vocabulary that is appropriate for a training corpus—not to improve neural models by increasing their vocabulary.
BIBREF8 proposed a method of replacing and copying an unknown word token with a bilingual dictionary in neural MT. They automatically constructed a translation dictionary from a training corpus using a word-alignment model (GIZA++), which finds a corresponding source word for each unknown target word token. They replaced the unknown word token with the corresponding word into which the source word was translated by the bilingual dictionary. BIBREF3 used a similar method for neural GEC. Because our proposed method is performed as preprocessing, it can be used simultaneously with this replace-and-copy method.
Algorithms that rank words using co-occurrence are employed in many natural language processing tasks. For example, TextRank BIBREF9 uses PageRank BIBREF10 for keyword extraction. TextRank constructs a word graph in which nodes represent words, and edges represent co-occurrences between words within a fixed window; TextRank then executes the PageRank algorithm to extract keywords. Although this is an unsupervised method, it achieves nearly the same precision as one state-of-the-art supervised method BIBREF11 . BIBREF12 used HITS BIBREF13 to select seeds and create a stop list for bootstrapping in natural language processing. They reported significant improvements over a baseline method using unigram frequency. Their graph-based algorithm was effective at extracting the relevance between words, which cannot be grasped with a simple unigram frequency. In this study, we use HITS to retrieve co-occurring words from a training corpus to reduce noise in the source text.
Algorithm 1: HITS. Input: hubness vector $i_0$, adjacency matrix $A$, iteration number $\tau$. Output: hubness vector $i$, authority vector $p$. Initialize $i \leftarrow i_0$. For $t = 1, 2, ..., \tau$: compute $p \leftarrow A^{\top} i$ and $i \leftarrow A p$, then normalize $i$ and $p$. Return $i$ and $p$.
Hubness and authority scores from HITS
HITS, which is a web page ranking algorithm proposed by BIBREF13 , computes hubness and authority scores for a web page (node) using the adjacency matrix that represents the web pages' link (edge) transitions. A web page with high authority is linked from pages with high hubness scores, and a web page with a high hubness score links to pages with high authority scores. Algorithm 1 shows pseudocode for the HITS algorithm. Hubness and authority scores converge by setting the iteration number $\tau $ to a sufficiently large value.
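The power iteration above is straightforward to implement. The sketch below is a minimal NumPy version of Algorithm 1, assuming the adjacency matrix fits in memory as a dense array; for a vocabulary-sized graph a sparse matrix would be used instead.

```python
import numpy as np

def hits(A, num_iters=300):
    """Power iteration for HITS hubness/authority scores.

    A: (V, V) adjacency matrix of co-occurrence weights.
    Returns L2-normalized (hubness, authority) vectors.
    """
    V = A.shape[0]
    hub = np.ones(V)                              # i <- i_0
    authority = np.ones(V)
    for _ in range(num_iters):                    # t = 1, ..., tau
        authority = A.T @ hub                     # pointed to by good hubs
        hub = A @ authority                       # points to good authorities
        authority /= np.linalg.norm(authority)    # normalize p
        hub /= np.linalg.norm(hub)                # normalize i
    return hub, authority
```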
Vocabulary selection using HITS
In this study, we create an adjacency matrix from a training corpus by considering a word as a node and the co-occurrence between words as an edge. Unlike in web pages, co-occurrence between words is nonbinary; therefore, several co-occurrence measures can be used as edge weights. Section 3.3 describes the co-occurrence measures and the context in which co-occurrence is defined.
The HITS algorithm is executed using the adjacency matrix created in the way described above. As a result, it is possible to obtain a score indicating importance of each word while considering contextual information in the training corpus.
Figure 1 shows a word graph example. A word that obtains a high score in the HITS algorithm is considered to co-occur with a variety of words. Figure 1 demonstrates that second order co-occurrence scores (the scores of words co-occurring with words that co-occur with various words BIBREF14 ) are also high.
In this study, words with high hubness scores are considered to co-occur with an important word, and low-scoring words are excluded from the vocabulary. Using this method appears to generate a vocabulary that includes words that are more suitable for representing a context vector for encoder models.
Word graph construction
To acquire co-occurrence relations, we use a combination of each word and its peripheral words. Specifically, we combine the target word with surrounding words within window width $N$ and count the occurrences. When defining the context in this way, because the adjacency matrix becomes symmetric, the same hubness and authority scores can be obtained. Figure 2 shows an example of co-occurrence in which $N$ is set to two. In this study, singleton words and their co-occurrences are excluded from the graph.
We use raw co-occurrence frequency (Freq) and positive pointwise mutual information (PPMI) between words as the ( $x, y$ ) element $A_{xy}$ of the adjacency matrix. However, naive PPMI reacts sensitively to low-frequency words in a training corpus. To take co-occurrence frequency into account, we weight the PMI by the logarithm of the number of co-occurrences and use a PPMI based on this weighted PMI (Equation 8).

$$A_{xy}^{freq} = |x, y| \\ A_{xy}^{ppmi} = \mathrm {max}(0, \mathrm {pmi}(x, y) + \log _2|x, y|)$$ (Eq. 8)
Equation 9 is the PMI of target word $x$ and co-occurrence word $y$ . $M$ is the number of tokens of the combination, $|x, *|$ and $|*, y| $ are the number of token combinations when fixing target word $x$ and co-occurrence word $y$ , respectively.
$$ \mathrm {pmi}(x, y) = \log _2 \frac{M \cdot |x, y|}{|x, *||*, y|}$$ (Eq. 9)
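As an illustration of how the adjacency matrix in Equation (8) could be constructed, the following sketch counts symmetric co-occurrences within a fixed window and applies the log-weighted PPMI; the tokenization and data structures are simplified assumptions rather than the authors' implementation.

```python
from collections import Counter
import numpy as np

def cooccurrence_counts(sentences, window=2):
    """Count symmetric co-occurrences of each word with neighbors in a window."""
    counts = Counter()
    for words in sentences:
        for i, x in enumerate(words):
            for y in words[i + 1:i + 1 + window]:
                counts[(x, y)] += 1
                counts[(y, x)] += 1          # symmetric adjacency matrix
    return counts

def weighted_ppmi(counts):
    """A_xy = max(0, pmi(x, y) + log2 |x, y|), dropping singleton pairs."""
    M = sum(counts.values())
    row, col = Counter(), Counter()
    for (x, y), c in counts.items():
        row[x] += c
        col[y] += c
    A = {}
    for (x, y), c in counts.items():
        if c <= 1:                           # co-occurred only once -> zero
            continue
        pmi = np.log2(M * c / (row[x] * col[y]))
        A[(x, y)] = max(0.0, pmi + np.log2(c))
    return A
```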
Experimental setting
In the first experiment, we conduct a Japanese-to-English translation using the Asian Scientific Paper Excerpt Corpus (ASPEC; BIBREF15 , BIBREF15 ). We follow the official split of the train, development, and test sets. As training data, we use only the first 1.5 million sentences sorted by sentence alignment confidence to obtain a Japanese–English parallel corpus (sentences of more than 60 words are excluded). Our training set consists of 1,456,278 sentences, development set consists of 1,790 sentences, and test set consists of 1,812 sentences. The training set has 247,281 Japanese word types and 476,608 English word types.
The co-occurrence window width $N$ is set to two. For combinations that co-occurred only once within the training corpus, we set the value of element $A_{xy}$ of the adjacency matrix to zero. The iteration number $\tau $ of the HITS algorithm is set to 300. As mentioned in Section 1, we only use the proposed method on the encoder side.
For this study's neural MT model, we implement global dot attention BIBREF16 . We train a baseline model that uses vocabulary that is determined by its frequency in the training corpus. Vocabulary size is set to 100K on the encoder side and 50K on the decoder side. Additionally, we conduct an experiment of varying vocabulary size of the encoder to 50K in the baseline and PPMI to investigate the effect of vocabulary size. Unless otherwise noted, we conduct an analysis of the model using the vocabulary size of 100K. The number of dimensions for each of the hidden and embedding layers is 512. The mini-batch size is 150. AdaGrad is used as an optimization method with an initial learning rate of 0.01. Dropout is applied with a probability of 0.2.
For this experiment, a bilingual dictionary is prepared for postprocessing unknown words BIBREF8 . When the model outputs an unknown word token, the word with the highest attention score is used as a query to replace the unknown token with the corresponding word from the dictionary. If not in the dictionary, we replace the unknown word token with the source word (unk_rep). This dictionary is created based on word alignment obtained using fast_align BIBREF17 on the training corpus.
We evaluate translation results using BLEU scores BIBREF18 .
The BLEU score with postprocessing (unk_rep) improves by 0.46, 0.44, and 0.46 points in the baseline, Freq, and PPMI, respectively.
The second experiment addresses GEC. We combine the FCE public dataset BIBREF20 , NUCLE corpus BIBREF21 , and English learner corpus from the Lang-8 learner corpus BIBREF22 and remove sentences longer than 100 words to create a training corpus. From the Lang-8 learner corpus, we use only the pairs of erroneous and corrected sentences. We use 1,452,584 sentences as a training set (502,908 types on the encoder side and 639,574 types on the decoder side). We evaluate the models' performances on the standard sets from the CoNLL-14 shared task BIBREF23 using CoNLL-13 data as a development set (1,381 sentences) and CoNLL-14 data as a test set (1,312 sentences). We employ $\mathrm {F_{0.5}}$ as an evaluation measure for the CoNLL-14 shared task.
We use the same model as in Section 4.1 as a neural model for GEC. The models' parameter settings are similar to the MT experiment, except for the vocabulary and batch sizes. In this experiment, we set the vocabulary size on the encoder and decoder sides to 150K and 50K, respectively. Additionally, we conduct the experiment of changing vocabulary size of the encoder to 50K to investigate the effect of the vocabulary size. Unless otherwise noted, we conduct an analysis of the model using the vocabulary size of 150K. The mini-batch size is 100.
Results
Table 1 shows the translation accuracy (BLEU scores) and $p$ -value of a significance test ( $p < 0.05$ ) by bootstrap resampling BIBREF19 . The PPMI model improves translation accuracy by 0.56 points in Japanese-to-English translation, which is a significant improvement.
Next, we examine differences in vocabulary by comparing each model with the baseline. Compared to the vocabulary of the baseline in 100K setting, Freq and PPMI replace 16,107 and 17,166 types, respectively; compared to the vocabulary of the baseline in 50K setting, PPMI replaces 4,791 types.
Analysis
According to Table 1 , the performance of Freq is almost the same as that of the baseline. When examining the differences in selected words in vocabulary between PPMI and Freq, we find that PPMI selects more low-frequency words in the training corpus compared to Freq, because PPMI deals with not only frequency but also co-occurrence.
The effect of unk_rep is almost the same in the baseline as in the proposed method, which indicates that the proposed method can be combined with other schemes as a preprocessing step.
Comparing vocabulary sizes of 50K and 100K, the BLEU score of the 100K setting is higher than that of the 50K setting for PPMI, whereas the two scores are almost the same for the baseline. We suppose that the larger the encoder vocabulary, the more noisy words the baseline includes, while PPMI filters out these words. This is why the proposed method works well when the vocabulary size is large.
To examine the effect of changing the vocabulary on the source side, the test set is divided into two subsets: COMMON and DIFF. The former (1,484 sentences) consists of only the common vocabulary between the baseline and PPMI, whereas the latter (328 sentences) includes at least one word excluded from the common vocabulary.
Table 2 shows the translation accuracy of the COMMON and DIFF outputs. Translation performance of both corpora is improved.
In order to observe how PPMI improves COMMON outputs, we measure the similarity of the baseline and PPMI output sentences by counting the exact same sentences. In the COMMON outputs, 72 sentence pairs (4.85%) are the same, whereas 9 sentence pairs are the same in the DIFF outputs (2.74%). Surprisingly, even though it uses the same vocabulary, PPMI often outputs different but fluent sentences.
Table 3 shows an example of Japanese-to-English translation. The outputs of the proposed method (especially PPMI) are improved, despite the source sentence being expressed with common vocabulary; this is because the proposed method yielded a better encoder model than the baseline.
The $\mathrm {F_{0.5}}$ of the baseline stays almost the same when the vocabulary size increases, while the PPMI model improves its score. As in MT, we suppose that PPMI filters out noisy words.
As in Section 4.3, we perform a follow-up experiment using two data subsets: COMMON and DIFF, which contain 1,072 and 240 sentences, respectively.
Table 5 shows the accuracy of the error correction of the COMMON and DIFF outputs. Precision increases by 11.81 points, whereas recall remains the same for the COMMON outputs.
In GEC, approximately 20% of COMMON's output pairs differ, which is caused by the differences in the training environment. Unlike MT, we can copy OOV in the target sentence from the source sentence without loss of fluency; therefore, our model has little effect on recall, whereas its precision improves because of noise reduction.
Table 6 shows an example of GEC. The proposed method's output improves when the source sentence is expressed using common vocabulary.
Result
Table 4 shows the performance of the baseline and proposed method. The PPMI model improves precision and recall; it achieves a $\mathrm {F_{0.5}}$ -measure 1.48 points higher than the baseline method.
In setting the vocabulary size of encoder to 150K, PPMI replaces 37,185 types from the baseline; in the 50K setting, PPMI replaces 10,203 types.
Discussion
We described how the proposed method has a positive effect on learning the encoder. However, a question remains: what affects the performance? We analyze this question in this section.

First, we count the occurrences in the training corpus of the words included only in the baseline or only in the PPMI vocabulary. We also show the number of tokens per type (“Ave. tokens”) for the words included only in either the baseline or the PPMI vocabulary.
The result is shown in Table 7 . We find that the proposed method uses low-frequency words instead of high-frequency words in the training corpus. This result suggests that the proposed method works well despite the fact that the encoder of the proposed method encounters more <unk> than the baseline. This is because the proposed method excludes words that may interfere with the learning of encoder-decoder models.
Second, we conduct an analysis of the POS of the words in GEC to find why increasing OOV improves the learning of encoder-decoder models. Specifically, we apply POS tagging to the training corpus and calculate the occurrence of the POS of the words only included in the baseline or PPMI. We use NLTK as a POS tagger.
Table 8 shows the result. We observe that NOUN is the POS most affected by the proposed method and often becomes represented by <unk>. NOUN words in the baseline vocabulary contain some non-English words, such as Japanese or Korean; these words should be treated as OOV, but the baseline fails to exclude them using only frequency. According to Table 8 , NUM is also affected by the proposed method. NUM words in the baseline include simple numerals such as “119”, in addition to incorrectly segmented numerals such as “514&objID”. This word appears 25 times in the training corpus owing to the noisy nature of Lang-8. We suppose that the proposed method excludes these noisy words and has a positive effect on training.
Conclusion
In this paper, we proposed an OOV filtering method, which considers word co-occurrence information for encoder-decoder models. Unlike conventional OOV handling, this graph-based method selects the words that are more suitable for learning encoder models by considering contextual information. This method is effective for not only machine translation but also grammatical error correction.
This study employed a symmetric matrix (similar to skip-gram with negative sampling) to express relationships between words. In future research, we will develop this method by using vocabulary obtained by designing an asymmetric matrix to incorporate syntactic relations.
Acknowledgments
We thank Yangyang Xi of Lang-8, Inc. for allowing us to use the Lang-8 learner corpus. We also thank Masahiro Kaneko and anonymous reviewers for their insightful comments. | Yes |
0bd864f83626a0c60f5e96b73fb269607afc7c09 | 0bd864f83626a0c60f5e96b73fb269607afc7c09_0 | Q: How are sentence embeddings incorporated into the speech recognition system?
Text: Introduction
In a long conversation, semantically related words or phrases tend to reoccur across sentences; that is, there exists topical coherence. Existing speech recognition systems are built at the individual, isolated utterance level in order to make building systems computationally feasible. However, this may lose important conversational context information. There have been many studies that have attempted to inject longer context information BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 ; all of these models are developed on text data for the language modeling task.
There has been recent work that attempts to use conversational-context information within an end-to-end speech recognition framework BIBREF6 , BIBREF7 , BIBREF8 . The new end-to-end speech recognition approach BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 integrates all available information within a single neural network model, which makes fusing conversational-context information possible. However, these studies are limited to encoding only one preceding utterance and learning from a few hundred hours of annotated speech corpus, leading to minimal improvements.
Meanwhile, neural language models, such as fastText BIBREF17 , BIBREF18 , BIBREF19 , ELMo BIBREF20 , OpenAI GPT BIBREF21 , and Bidirectional Encoder Representations from Transformers (BERT) BIBREF22 , that encode words and sentences in fixed-length dense vectors, embeddings, have achieved impressive results on various natural language processing tasks. Such general word/sentence embeddings learned on large text corpora (i.e., Wikipedia) has been used extensively and plugged in a variety of downstream tasks, such as question-answering and natural language inference, BIBREF22 , BIBREF20 , BIBREF23 , to drastically improve their performance in the form of transfer learning.
In this paper, we create a conversational-context aware end-to-end speech recognizer capable of incorporating conversational context to better process long conversations. Specifically, we propose to exploit external word and/or sentence embeddings trained on massive amounts of text resources (i.e., fastText, BERT) so that the model can learn better conversational-context representations. So far, the use of such pre-trained embeddings has found limited success in the speech recognition task. We also add a gating mechanism to the decoder network that can integrate all the available embeddings (word, speech, conversational-context) efficiently, with increased representational power from multiplicative interactions. Additionally, we explore a way to train our speech recognition model even with text-only data in the form of pre-training and joint-training approaches. We evaluate our model on the Switchboard conversational speech corpus BIBREF24 , BIBREF25 , and show that our model outperforms the sentence-level end-to-end speech recognition model. The main contributions of our work are as follows:
Related work
Several recent studies have considered incorporating context information within an end-to-end speech recognizer BIBREF26 , BIBREF27 . In contrast with our method, which uses conversational-context information in a long conversation, their methods use a list of phrases (i.e., play a song) in the reference transcription for specific tasks such as contact names, song names, voice search, and dictation.

Several recent studies have considered exploiting longer context information that spans multiple sentences BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . In contrast with our method, which uses a single framework for the speech recognition task, their methods have been developed on text data for language models and must therefore be integrated with a conventional acoustic model that is built separately without longer context information.

Several recent studies have considered embedding longer context information within an end-to-end framework BIBREF6 , BIBREF7 , BIBREF8 . In contrast with our method, which can learn a better conversational-context representation with a gated network that incorporates external word/sentence embeddings from a history of multiple preceding sentences, their methods are limited to learning conversational-context representations from one preceding sentence in the annotated speech training set.
Gating-based approaches have been used for fusing word embeddings with visual representations in genre classification task or image search task BIBREF28 , BIBREF29 and for learning different languages in speech recognition task BIBREF30 .
Joint CTC/Attention-based encoder-decoder network
We perform end-to-end speech recognition using a joint CTC/Attention-based approach with graphemes as the output symbols BIBREF16 , BIBREF31 . The key advantage of the joint CTC/Attention framework is that it can address the weaknesses of the two main end-to-end models, Connectionist Temporal Classification (CTC) BIBREF9 and attention-based encoder-decoder (Attention) BIBREF32 , by combining the strengths of the two. With CTC, the neural network is trained according to a maximum-likelihood training criterion computed over all possible segmentations of the utterance's sequence of feature vectors to its sequence of labels while preserving left-right order between input and output. With attention-based encoder-decoder models, the decoder network can learn the language model jointly without relying on the conditional independent assumption.
Given a sequence of acoustic feature vectors, $\mathbf {x}$ , and the corresponding graphemic label sequence, $\mathbf {y}$ , the joint CTC/Attention objective is represented as follows by combining two objectives with a tunable parameter $\lambda : 0 \le \lambda \le 1$ :
$$\mathcal {L} &= \lambda \mathcal {L}_\text{CTC} + (1-\lambda ) \mathcal {L}_\text{att}.$$ (Eq. 6)
Each loss to be minimized is defined as the negative log likelihood of the ground truth character sequence $\mathbf {y^*}$ and is computed as:
$$\begin{split} \mathcal {L}_\text{CTC} \triangleq & -\ln \sum _{\mathbf {\pi } \in \Phi (\mathbf {y})} p(\mathbf {\pi }|\mathbf {x}) \end{split}$$ (Eq. 7)
$$\begin{split} \mathcal {L}_\text{att} \triangleq & -\sum _u \ln p(y_u^*|\mathbf {x},y^*_{1:u-1}) \end{split}$$ (Eq. 8)
where $\mathbf {\pi }$ is the label sequence allowing the presence of the blank symbol, $\Phi $ is the set of all possible $\mathbf {\pi }$ given $u$ -length $\mathbf {y}$ , and $y^*_{1:u-1}$ is all the previous labels.
Both CTC and the attention-based encoder-decoder networks are also used in the inference step. The final hypothesis is a sequence that maximizes a weighted conditional probability of CTC and attention-based encoder-decoder network BIBREF33 :
$$\begin{split} \mathbf {y}* = \text{argmax} \lbrace & \gamma \log p_{CTC}(\mathbf {y}|\mathbf {x}) \\ &+ (1-\gamma ) \log p_{att}(\mathbf {y}|\mathbf {x}) \rbrace \end{split}$$ (Eq. 9)
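The interpolation of the two objectives (Eq. 6) and of the two decoding scores (Eq. 9) reduces to simple weighted sums, as in this minimal sketch; the CTC and attention networks themselves are assumed to already produce the respective losses and log-probabilities.

```python
def joint_ctc_attention_loss(ctc_loss, att_loss, lam=0.2):
    # L = lambda * L_CTC + (1 - lambda) * L_att          (Eq. 6)
    return lam * ctc_loss + (1.0 - lam) * att_loss

def joint_decoding_score(log_p_ctc, log_p_att, gamma=0.3):
    # weighted score used to rank beam-search hypotheses (Eq. 9)
    return gamma * log_p_ctc + (1.0 - gamma) * log_p_att
```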
Acoustic-to-Words Models
In this work, we use word units as our model outputs instead of sub-word units. Direct acoustics-to-word (A2W) models train a single neural network to directly recognize words from speech without any sub-word units, pronunciation model, decision tree, or decoder, which significantly simplifies the training and decoding process BIBREF34 , BIBREF35 , BIBREF36 , BIBREF37 , BIBREF38 . In addition, building A2W models can learn more semantically meaningful conversational-context representations and allows us to exploit external resources like word/sentence embeddings, where the unit of representation is generally words. However, A2W models require more training data than conventional sub-word models because they need sufficient acoustic training examples per word to train well, and they need to handle out-of-vocabulary (OOV) words. As a way to manage this OOV issue, we first restrict the vocabulary to 10k frequently occurring words. We then additionally use single character units and start-of-OOV (sunk), end-of-OOV (eunk) tokens so that our model can generate OOV words by decomposing them into character sequences. For example, the OOV word rainstorm is decomposed into (sunk) r a i n s t o r m (eunk), and the model tries to learn such a character sequence rather than generate the OOV token. With this method, we obtained 1.2% - 3.7% relative improvements in word error rate (WER) on the evaluation set, which contains 2.9% OOVs.
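A sketch of the OOV decomposition step described above; the (sunk)/(eunk) token strings and the function name are illustrative, not the authors' exact implementation.

```python
def to_output_units(word, vocab):
    """Map a word to A2W output units; spell OOV words out character by character."""
    if word in vocab:
        return [word]
    return ["(sunk)"] + list(word) + ["(eunk)"]

# to_output_units("rainstorm", vocab) ->
# ['(sunk)', 'r', 'a', 'i', 'n', 's', 't', 'o', 'r', 'm', '(eunk)']
```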
Conversational-context Aware Models
In this section, we describe the A2W model with conversational-context fusion. In order to fuse conversational context information within the A2W, end-to-end speech recognition framework, we extend the decoder sub-network to predict the output additionally conditioning on conversational context, by learning a conversational-context embedding. We encode single or multiple preceding utterance histories into a fixed-length, single vector, then inject it to the decoder network as an additional input at every output step.
Say we have $K$ utterances in a conversation. For the $k$ -th sentence, we have acoustic features $(x_1, \cdots , x_T)^k$ and an output word sequence $(w_1, \cdots , w_U)$ . At output timestep $u$ , our decoder generates the probability distribution over words ( $w_u^k$ ), conditioned on 1) the speech embedding, an attended high-level representation ( $\mathbf {e_{speech}^{k}}$ ) generated from the encoder, 2) the word embeddings of all the words seen previously ( $e^{u-1}_{word}$ ), and 3) the conversational-context embedding ( $e^{k}_{context}$ ), which represents the conversational-context information for the current ( $k$ -th) utterance prediction:
$$\mathbf {e^{k}_{speech}} = & \text{Encoder}(\mathbf {x^k}) \\ w^k_u \sim & \text{Decoder}(\mathbf {e^{k}_{context}}, e^k_{word}, \mathbf {e^{k}_{speech}})$$ (Eq. 11)
We can simply represent such contextual embedding, $e^{k}_{context}$ , by mean of one-hot word vectors or word distributions, $\texttt {mean}(e^{k-1}_{word_{1}} + \cdots + e^{k-1}_{word_{U}})$ from the preceding utterances.
In order to learn and use the conversational-context during training and decoding, we serialize the utterances based on their onset times and their conversations rather than random shuffling of data. We shuffle data at the conversation level and create mini-batches that contain only one sentence of each conversation. We fill the "dummy" input/output example at positions where the conversation ended earlier than others within the mini-batch to not influence other conversations while passing context to the next batch.
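A simplified sketch of the conversation-level batching described above; the dummy example and field names are placeholders for whatever structure the data pipeline actually uses.

```python
def conversation_batches(conversations):
    """Yield mini-batches containing the u-th utterance of every conversation.

    conversations: list of conversations, each a list of utterance examples
    sorted by onset time. Conversations that end early are padded with a
    dummy example so the remaining conversations keep passing context on.
    """
    DUMMY = {"feats": None, "words": None}   # ignored when computing the loss
    max_len = max(len(conv) for conv in conversations)
    for u in range(max_len):
        yield [conv[u] if u < len(conv) else DUMMY for conv in conversations]
```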
External word/sentence embeddings
Learning better representation of conversational-context is the key to achieve better processing of long conversations. To do so, we propose to encode the general word/sentence embeddings pre-trained on large textual corpora within our end-to-end speech recognition framework. Another advantage of using pre-trained embedding models is that we do not need to back-propagate the gradients across contexts, making it easier and faster to update the parameters for learning a conversational-context representation.
There exist many publicly available word/sentence embeddings. We can broadly classify them into two categories: (1) non-contextual word embeddings, and (2) contextual word embeddings. Non-contextual word embeddings, such as Word2Vec BIBREF1 , GloVe BIBREF39 , and fastText BIBREF17 , map each word independently of the context of the sentence in which the word occurs. Although they are easy to use, they assume that each word represents a single meaning, which is not true in the real world. Contextualized word embeddings and sentence embeddings, such as deep contextualized word representations BIBREF20 and BERT BIBREF22 , encode the complex characteristics and meanings of words in their various contexts by jointly training a bidirectional language model. The BERT model proposed a masked language model training approach that also enables it to learn good “sentence” representations in order to predict the masked words.
In this work, we explore both types of embeddings to learn conversational-context embeddings, as illustrated in Figure 1 . The first method uses word embeddings, fastText, to generate 300-dimensional embeddings from the 10k-dimensional one-hot vector or distribution over words for each previous word, and then merges them into a single context vector, $e^k_{context}$ . Since we also consider multiple word/utterance histories, we consider two simple ways to merge multiple embeddings: (1) mean, and (2) concatenation. The second method uses sentence embeddings, BERT. It is used to generate a single 768-dimensional sentence embedding from the 10k-dimensional one-hot vector or distribution over previous words, which is then merged into a single context vector with the two different merging methods. Since our A2W model uses a restricted vocabulary of 10k as its output units, which differs from the external embedding models, we need to handle out-of-vocabulary words. For fastText, we map words that are missing from the pretrained embeddings to samples from a multivariate normal distribution with the sample mean and sample variance of the known words. For BERT, we use its provided tokenizer to generate byte pair encodings to handle OOV words.
Using this approach, we can obtain denser, more informative, fixed-length vectors to encode the conversational-context information, $e^k_{context}$ , to be used in the next ( $k$ -th) utterance prediction.
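The following sketch illustrates one way to build such a context vector from fastText-style word vectors, including the random-vector fallback for OOV words and the two merging strategies; all names and the exact fallback distribution parameters are assumptions for illustration.

```python
import numpy as np

def fasttext_context_embedding(prev_utterances, emb, mean_vec, cov, merge="mean"):
    """Build a conversational-context vector from preceding utterances.

    emb: dict mapping word -> 300-dim fastText vector.
    mean_vec, cov: sample mean / covariance of the known vectors, used to
    draw random vectors for words missing from the pretrained embeddings.
    """
    utt_vecs = []
    for utt in prev_utterances:
        vecs = [emb[w] if w in emb
                else np.random.multivariate_normal(mean_vec, cov)
                for w in utt]
        utt_vecs.append(np.mean(vecs, axis=0))    # one vector per utterance
    if merge == "mean":
        return np.mean(utt_vecs, axis=0)
    return np.concatenate(utt_vecs)               # merge == "concat"
```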
Contextual gating
We use a contextual gating mechanism in our decoder network to combine the conversational-context embeddings with the speech and word embeddings effectively. Our gating is contextual in the sense that multiple embeddings compute a gate value that is dependent on the context of the multiple utterances that occur in a conversation. Using these contextual gates can be beneficial for deciding how to weigh the different embeddings: conversational-context, word, and speech embeddings. Rather than merely concatenating conversational-context embeddings BIBREF6 , contextual gating can achieve more improvement because of its increased representational power from multiplicative interactions.
Figure 2 illustrates our proposed contextual gating mechanism. Let $e_w = e_w(y_{u-1})$ be our previous word embedding for a word $y_{u-1}$ , and let $e_s = e_s(x^k_{1:T})$ be a speech embedding for the acoustic features of current $k$ -th utterance $x^k_{1:T}$ and $e_c = e_c(s_{k-1-n:k-1})$ be our conversational-context embedding for $n$ -number of preceding utterances ${s_{k-1-n:k-1}}$ . Then using a gating mechanism:
$$g = \sigma (e_c, e_w, e_s)$$ (Eq. 15)
where $\sigma $ is a one-hidden-layer DNN with $\texttt {sigmoid}$ activation, the gated embedding $e$ is calculated as
$$e = g \odot (e_c, e_w, e_s) \\ h = \text{LSTM}(e)$$ (Eq. 16)
and fed into the LSTM decoder hidden layer. The output of the decoder $h$ is then combined with conversational-context embedding $e_c$ again with a gating mechanism,
$$g = \sigma (e_c, h) \\ \hat{h} = g \odot (e_c, h)$$ (Eq. 17)
Then the next hidden layer takes these gated activations, $\hat{h}$ , and so on.
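A minimal PyTorch-style sketch of the first gating step (Eq. 15-16); the hidden-layer size and activation of sigma, and the module name, are assumptions, and the second gating step on the decoder output would follow the same pattern.

```python
import torch
import torch.nn as nn

class ContextualGate(nn.Module):
    """Gate conversational-context, word, and speech embeddings (Eq. 15-16)."""
    def __init__(self, dim_c, dim_w, dim_s):
        super().__init__()
        d = dim_c + dim_w + dim_s
        self.sigma = nn.Sequential(
            nn.Linear(d, d), nn.ReLU(),     # one hidden layer (size/activation assumed)
            nn.Linear(d, d), nn.Sigmoid(),  # sigmoid output gives the gate g
        )

    def forward(self, e_c, e_w, e_s):
        e = torch.cat([e_c, e_w, e_s], dim=-1)
        g = self.sigma(e)                   # g = sigma(e_c, e_w, e_s)
        return g * e                        # gated embedding fed to the LSTM decoder
```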
Datasets
To evaluate our proposed conversational end-to-end speech recognition model, we use the Switchboard (SWBD) LDC corpus (97S62) task. We split 300 hours of the SWBD training set into two: 285 hours of data for the model training, and 5 hours of data for the hyper-parameter tuning. We evaluate the model performance on the HUB5 Eval2000 which consists of the Callhome English (CH) and Switchboard (SWBD) (LDC2002S09, LDC2002T43). In Table 1 , we show the number of conversations and the average number of utterances per a single conversation.
The audio data is sampled at 16kHz, and then each frame is converted to a 83-dimensional feature vector consisting of 80-dimensional log-mel filterbank coefficients and 3-dimensional pitch features as suggested in BIBREF40 . The number of our word-level output tokens is 10,038, which includes 47 single character units as described in Section "Acoustic-to-Words Models" . Note that no pronunciation lexicon was used in any of the experiments.
Training and decoding
For the architecture of the end-to-end speech recognition, we used joint CTC/Attention end-to-end speech recognition BIBREF16 , BIBREF31 . As suggested in BIBREF45 , BIBREF33 , the input feature images are reduced to ( $1/4 \times 1/4$ ) images along with the time-frequency axis within the two max-pooling layers in CNN. Then, the 6-layer BLSTM with 320 cells is followed by the CNN layer. For the attention mechanism, we used a location-based method BIBREF14 . For the decoder network, we used a 2-layer LSTM with 300 cells. In addition to the standard decoder network, our proposed models additionally require extra parameters for gating layers in order to fuse conversational-context embedding to the decoder network compared to baseline. We denote the total number of trainable parameters in Table 2 .
For the optimization method, we use AdaDelta BIBREF46 with gradient clipping BIBREF47 . We used $\lambda = 0.2$ for joint CTC/Attention training (in Eq. 6 ) and $\gamma = 0.3$ for joint CTC/Attention decoding (in Eq. 9 ). We bootstrap the training of our proposed conversational end-to-end models from the baseline end-to-end models. To decide the best models for testing, we monitor the development accuracy where we always use the model prediction in order to simulate the testing scenario. At inference, we used a left-right beam search method BIBREF48 with the beam size 10 for reducing the computational cost. We adjusted the final score, $s(\mathbf {y}|\mathbf {x})$ , with the length penalty $0.5$ . The models are implemented using the PyTorch deep learning library BIBREF49 , and ESPnet toolkit BIBREF16 , BIBREF31 , BIBREF50 .
Results
Our results are summarized in the Table 2 where we first present the baseline results and then show the improvements by adding each of the individual components that we discussed in previous sections, namely, gated decoding, pretraining decoder network, external word embedding, external conversational embedding and increasing receptive field of the conversational context. Our best model gets around 15% relative improvement on the SWBD subset and 5% relative improvement on the CallHome subset of the eval2000 dataset.
We start by evaluating our proposed model which leveraged conversational-context embeddings learned from training corpus and compare it with a standard end-to-end speech recognition models without conversational-context embedding. As seen in Table 2 , we obtained a performance gain over the baseline by using conversational-context embeddings which is learned from training set.
Pre-training decoder network
Then, we observe that pre-training the decoder network can improve accuracy further, as shown in Table 2 . Using pre-training of the decoder network, we achieved a 5% relative improvement in WER on the SWBD set. Since we add extra parameters to the decoder network to learn conversational-context embeddings, our model requires more effort to learn these additional parameters. To relieve this issue, we used a pre-training technique to train the decoder network with text-only data first. We simply used a mask on top of the Encoder/Attention layer so that we can control the gradients of batches containing text-only data and do not update the Encoder/Attention sub-network parameters.
Use of words/sentence embeddings
Next, we evaluated the use of pretrained external embeddings (fastText and BERT). We initially observed that we can obtain 2.4% relative improvement over (the model with decoder pretraining) in WER by using fastText for additional word embeddings to the gated decoder network.
We also extensively evaluated various ways to use fastText/BERT for conversational-context embeddings. Both methods with fastText and with BERT shows significant improvement from the baseline as well as vanilla conversational-context aware model.
Conversational-context Receptive Field
We also investigate the effect of the number of utterance history being encoded. We tried different $N = [1, 5, 9]$ number of utterance histories to learn the conversational-context embeddings. Figure 3 shows the relative improvements in the accuracy on the Dev set ( "Training and decoding" ) over the baseline “non-conversational” model. We show the improvements on the two different methods of merging the contextual embeddings, namely mean and concatenation. Typically increasing the receptive field of the conversational-context helps improve the model. However, as the number of utterence history increased, the number of trainable parameters of the concatenate model increased making it harder for the model to train. This led to a reduction in the accuracy.
We also found that using a 5-utterance history with the concatenation method performed best (15%) on the SWBD set, and using a 9-utterance history with the mean method performed best (5%) on the CH set. We also observed that the improvement diminished when we used a 9-utterance history on the SWBD set, unlike the CH set. One possible explanation is that the conversational context may not be relevant to the current utterance prediction, or the model is overfitting.
Sampling technique
We also experiment with an utterance level sampling strategy with various sampling ratio, $[0.0, 0.2, 0.5, 1.0]$ . Sampling techniques have been extensively used in sequence prediction tasks to reduce overfitting BIBREF51 by training the model conditioning on generated tokens from the model itself, which is how the model actually do at inference, rather than the ground-truth tokens. Similar to choosing previous word tokens from the ground truth or from the model output, we apply it to choose previous utterance from the ground truth or from the model output for learning conversational-context embeddings. Figure 4 shows the relative improvement in the development accuracy ( "Training and decoding" ) over the $1.0$ sampling rate which is always choosing model's output. We found that a sampling rate of 20% performed best.
Analysis of context embeddings
We develop a scoring function, $s(i,j)$ to check if our model conserves the conversational consistency for validating the accuracy improvement of our approach. The scoring function measures the average of the conversational distances over every consecutive hypotheses generated from a particular model. The conversational distance is calculated by the Euclidean distance, $\text{dist}(e_i, e_j)$ of the fixed-length vectors $e_i, e_j$ which represent the model's $i, j$ -th hypothesis, respectively. To obtain a fixed-length vector, utterance embedding, given the model hypothesis, we use BERT sentence embedding as an oracle. Mathematically it can be written as, $ s(i,j) = \frac{1}{N}\sum _{i,j \in \texttt {eval}}(\text{dist}(e_i,e_j)) $
where $i, j$ is a pair of consecutive hypotheses in the evaluation data $\texttt {eval}$ , $N$ is the total number of $i,j$ pairs, and $e_i, e_j$ are BERT embeddings. In our experiment, we select the pairs of consecutive utterances for which the reference shows a distance score at least as low as that of the baseline hypotheses.
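The scoring function itself is simple to compute once a sentence embedder is available, as in this sketch; the embed callable standing in for the BERT oracle is an assumption.

```python
import numpy as np

def conversation_distance(hypotheses, embed):
    """Average Euclidean distance between embeddings of consecutive hypotheses.

    embed: callable mapping a sentence string to a fixed-length vector
    (e.g., a BERT sentence embedding used as the oracle).
    """
    pairs = list(zip(hypotheses[:-1], hypotheses[1:]))
    dists = [np.linalg.norm(embed(a) - embed(b)) for a, b in pairs]
    return sum(dists) / len(dists)
```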
From this process, we obtained three conversational distance scores from 1) the reference transcripts, 2) the hypotheses of our vanilla conversational model which is not using BERT, and 3) the hypotheses of our baseline model. Figure 5 shows the score comparison.
We found that our proposed model was 7.4% relatively closer to the reference than the baseline. This indicates that our conversational-context embedding leads to improved similarity across adjacent utterances, resulting in better processing a long conversation.
Conclusion
We have introduced a novel method for conversational-context aware end-to-end speech recognition based on a gated network that incorporates word/sentence/speech embeddings. Unlike prior work, our model is trained on conversational datasets to predict a word conditioned on multiple preceding conversational-context representations, and consequently improves recognition accuracy on long conversations. Moreover, our gated network can effectively incorporate text-based external resources, word or sentence embeddings (i.e., fastText, BERT), within an end-to-end framework so that the whole system can be optimized towards our final objective, speech recognition accuracy. By incorporating external embeddings with the gating mechanism, our model can achieve further improvement with better conversational-context representations. We evaluated the models on the Switchboard conversational speech corpus and show that our proposed model using gated conversational-context embeddings achieves 15% and 5% relative improvements in WER over a baseline model on the Switchboard and CallHome subsets, respectively. Our model was shown to outperform standard end-to-end speech recognition models trained on isolated sentences. This work is easy to scale and can potentially be applied to any speech-related task that can benefit from longer context information, such as spoken dialog systems or sentiment analysis.
Acknowledgments
We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research. This work also used the Bridges system, which is supported by NSF award number ACI-1445606, at the Pittsburgh Supercomputing Center (PSC). | BERT generates sentence embeddings that represent words in context. These sentence embeddings are merged into a single conversational-context vector that is used to calculate a gated embedding and is later combined with the output of the decoder h to provide the gated activations for the next hidden layer. |
c77d6061d260f627f2a29a63718243bab5a6ed5a | c77d6061d260f627f2a29a63718243bab5a6ed5a_0 | Q: How different is the dataset size of source and target?
Text: Question Answering
One of the most important characteristics of an intelligent system is to understand stories like humans do. A story is a sequence of sentences, and can be in the form of plain text BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 or spoken content BIBREF0 , where the latter usually requires the spoken content to be first transcribed into text by automatic speech recognition (ASR), and the model will subsequently process the ASR output. To evaluate the extent of the model's understanding of the story, it is asked to answer questions about the story. Such a task is referred to as question answering (QA), and has been a long-standing yet challenging problem in natural language processing (NLP).
Several QA scenarios and datasets have been introduced over the past few years. These scenarios differ from each other in various ways, including the length of the story, the format of the answer, and the size of the training set. In this work, we focus on context-aware multi-choice QA, where the answer to each question can be obtained by referring to its accompanying story, and each question comes with a set of answer choices with only one correct answer. The answer choices are in the form of open, natural language sentences. To correctly answer the question, the model is required to understand and reason about the relationship between the sentences in the story.
Transfer Learning
Transfer learning BIBREF7 is a vital machine learning technique that aims to use the knowledge learned from one task and apply it to a different, but related, task in order to either reduce the necessary fine-tuning data size or improve performance. Transfer learning, also known as domain adaptation, has achieved success in numerous domains such as computer vision BIBREF8 , ASR BIBREF9 , BIBREF10 , and NLP BIBREF11 , BIBREF12 . In computer vision, deep neural networks trained on a large-scale image classification dataset such as ImageNet BIBREF13 have proven to be excellent feature extractors for a broad range of visual tasks such as image captioning BIBREF14 , BIBREF15 , BIBREF16 and visual question answering BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , among others. In NLP, transfer learning has also been successfully applied to tasks like sequence tagging BIBREF21 , syntactic parsing BIBREF22 and named entity recognition BIBREF23 , among others.
The procedure of transfer learning in this work is straightforward and includes two steps. The first step is to pre-train the model on one MCQA dataset referred to as the source task, which usually contains abundant training data. The second step is to fine-tune the same model on the other MCQA dataset, which is referred to as the target task, that we actually care about, but that usually contains much less training data. The effectiveness of transfer learning is evaluated by the model's performance on the target task.
In supervised transfer learning, both the source and target datasets provide the correct answer to each question during pre-training and fine-tuning, and the QA model is guided by the correct answer to optimize its objective function in a supervised manner in both stages.
We also consider unsupervised transfer learning where the correct answer to each question in the target dataset is not available. In other words, the entire process is supervised during pre-training, but unsupervised during fine-tuning. A self-labeling technique inspired by BIBREF26 , BIBREF24 , BIBREF25 is used during fine-tuning on the target dataset. We present the proposed algorithm for unsupervised transfer learning in Algorithm 1. Algorithm 1: Unsupervised QA Transfer Learning. Input: source dataset with the correct answer to each question; target dataset without any answers; number of training epochs. Output: optimal QA model $M^{*}$ . (1) Pre-train QA model $M$ on the source dataset. (2) For each question in the target dataset, use $M$ to predict its answer. (3) For each question, assign the predicted answer to the question as the correct one. (4) Fine-tune $M$ on the target dataset as usual. Repeat (2)-(4) until the number of training epochs is reached.
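A compact sketch of this self-labeling loop; the fit/predict interface is a placeholder for whatever training and inference routines the QA model exposes.

```python
def unsupervised_transfer(model, source_data, target_questions, num_epochs):
    """Self-labeling transfer learning (Algorithm 1).

    The model pseudo-labels the unlabeled target questions with its own
    predictions, then fine-tunes on those pseudo-labels, repeatedly.
    """
    model.fit(source_data)                                   # supervised pre-training
    for _ in range(num_epochs):
        pseudo_labeled = [(q, model.predict(q)) for q in target_questions]
        model.fit(pseudo_labeled)                            # fine-tune as usual
    return model
```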
Transfer Learning for QA
Although transfer learning has been successfully applied to various applications, its applicability to QA has yet to be well-studied. In this paper, we tackle the TOEFL listening comprehension test BIBREF0 and MCTest BIBREF1 with transfer learning from MovieQA BIBREF2 using two existing QA models. Both models are pre-trained on MovieQA and then fine-tuned on each target dataset, so that their performance on the two target datasets are significantly improved. In particular, one of the models achieves the state-of-the-art on all target datasets; for the TOEFL listening comprehension test, it outperforms the previous best model by 7%.
Transfer learning without any labeled data from the target domain is referred to as unsupervised transfer learning. Motivated by the success of unsupervised transfer learning for speaker adaptation BIBREF24 , BIBREF25 and spoken document summarization BIBREF26 , we further investigate whether unsupervised transfer learning is feasible for QA.
Although not well studied in general, transfer Learning for QA has been explored recently. To the best of our knowledge, BIBREF27 is the first work that attempted to apply transfer learning for machine comprehension. The authors showed only limited transfer between two QA tasks, but the transferred system was still significantly better than a random baseline. BIBREF28 tackled a more specific task of biomedical QA with transfer learning from a large-scale dataset. The work most similar to ours is by BIBREF29 , where the authors used a simple transfer learning technique and achieved significantly better performance. However, none of these works study unsupervised transfer learning, which is especially crucial when the target dataset is small. BIBREF30 proposed a two-stage synthesis network that can generate synthetic questions and answers to augment insufficient training data without annotations. In this work, we aim to handle the case that the questions from the target domain are available.
Task Descriptions and Approaches
Among several existing QA settings, in this work we focus on multi-choice QA (MCQA). We are particularly interested in understanding whether a QA model can perform better on one MCQA dataset with knowledge transferred from another MCQA dataset. In Section "Question Answering Experiments" , we first formalize the task of MCQA. We then describe the procedures for transfer learning from one dataset to another in Section "Conclusion and Future Work" . We consider two kinds of settings for transfer learning in this paper, one is supervised and the other is unsupervised.
Multi-Choices QA
In MCQA, the inputs to the model are a story, a question, and several answer choices. The story, denoted by $\mathbf {S}$ , is a list of sentences, where each of the sentences is a sequence of words from a vocabulary set $V$ . The question and each of the answer choices, denoted by $\mathbf {Q}$ and $\mathbf {C}$ , are both single sentences also composed of words from $V$ . The QA model aims to choose one correct answer from multiple answer choices based on the information provided in $\mathbf {S}$ and $\mathbf {Q}$ .
Datasets
We used MovieQA BIBREF2 as the source MCQA dataset, and TOEFL listening comprehension test BIBREF0 and MCTest BIBREF1 as two separate target datasets. Examples of the three datasets are shown in Table 1 .
QA Neural Network Models
Among numerous models proposed for multiple-choice QA BIBREF32 , BIBREF33 , BIBREF0 , we adopt the End-to-End Memory Network (MemN2N) BIBREF34 and Query-Based Attention CNN (QACNN) BIBREF35 , both open-sourced, to conduct the experiments. Below we briefly introduce the two models in Section "End-to-End Memory Networks" and Section "Query-Based Attention CNN" , respectively. For the details of the models, please refer to the original papers.
End-to-End Memory Networks
An End-to-End Memory Network (MemN2N) first transforms $\mathbf {Q}$ into a vector representation with an embedding layer $B$ . At the same time, all sentences in $\mathbf {S}$ are also transformed into two different sentence representations with two additional embedding layers $A$ and $C$ . The first sentence representation is used in conjunction with the question representation to produce an attention-like mechanism that outputs the similarity between each sentence in $\mathbf {S}$ and $\mathbf {Q}$ . The similarity is then used to weight the second sentence representation. We then obtain the sum of the question representation and the weighted sentence representations over $\mathbf {S}$ as $\mathbf {Q}^\prime $ . In the original MemN2N, $\mathbf {Q}^\prime $ is decoded to provide the estimation of the probability of being an answer for each word within a fixed set. The word with the highest probability is then selected as the answer. However, in multiple-choice QA, the answer choice $\mathbf {C}$ is in the form of open, natural language sentences instead of a single word. Hence we modify MemN2N by adding an additional embedding layer to encode $\mathbf {C}$ as a vector representation by averaging the embeddings of the words in $\mathbf {C}$ . We then compute the similarity between each choice representation and $\mathbf {Q}^\prime $ . The choice $\mathbf {C}$ with the highest probability is then selected as the answer.
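A minimal sketch of the modified MemN2N scoring described above, with single-hop attention; tensor shapes and names are illustrative, and the embedding layers are assumed to already produce the sentence, question, and choice vectors.

```python
import torch
import torch.nn.functional as F

def memn2n_choice_scores(mem_A, mem_C, q_emb, choice_embs):
    """Score answer choices for the modified MemN2N.

    mem_A, mem_C: (num_sent, d) story memories from embedding layers A and C.
    q_emb: (d,) question embedding from layer B.
    choice_embs: (num_choices, d), each an average of its word embeddings.
    """
    attn = F.softmax(mem_A @ q_emb, dim=0)          # similarity of each sentence to Q
    o = (attn.unsqueeze(1) * mem_C).sum(dim=0)      # weighted second representation
    q_prime = q_emb + o                             # Q' = question + weighted memory
    return F.softmax(choice_embs @ q_prime, dim=0)  # probability of each choice
```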
Query-Based Attention CNN
A Query-Based Attention CNN (QACNN) first uses an embedding layer $E$ to transform $\mathbf {S}, \mathbf {Q}$ , and $\mathbf {C}$ into word embeddings. Then a compare layer generates a story-question similarity map $\mathbf {SQ}$ and a story-choice similarity map $\mathbf {SC}$ . The two similarity maps are then passed into a two-stage CNN architecture, where a question-based attention mechanism on the basis of $\mathbf {SQ}$ is applied to each of the two stages. The first stage CNN generates a word-level attention map for each sentence in $\mathbf {S}$ , which is then fed into the second stage CNN to generate a sentence-level attention map and yield choice-answer features for each of the choices. Finally, a classifier that consists of two fully-connected layers collects the information from every choice-answer feature and outputs the most likely answer. The trainable parameters are the embedding layer $E$ that transforms $\mathbf {S}, \mathbf {Q},$ and $\mathbf {C}$ into word embeddings, the two-stage CNN, $W_{CNN}^{(1)}$ and $W_{CNN}^{(2)}$ , that integrates information from the word to the sentence level and from the sentence to the story level, and the two fully-connected layers, $W_{FC}^{(1)}$ and $W_{FC}^{(2)}$ , that make the final prediction. We mention the trainable parameters here because in Section "Question Answering Experiments" we will conduct experiments to analyze the transferability of the QACNN by fine-tuning some parameters while keeping others fixed. Since QACNN is a newly proposed QA model with a relatively complex structure, we illustrate its architecture in Figure 1 , which is enough for understanding the rest of the paper. Please refer to the original paper BIBREF35 for more details.
Training Details
For pre-training MemN2N and QACNN on MovieQA, we followed the exact same procedure as in BIBREF2 and BIBREF35 , respectively. Each model was trained on the training set of the MovieQA task and tuned on the dev set, and the best performing models on the dev set were later fine-tuned on the target dataset. During fine-tuning, the model was also trained on the training set of target datasets and tuned on the dev set, and the performance on the testing set of the target datasets was reported as the final result. We use accuracy as the performance measurement.
Supervised Transfer Learning
Table 2 reports the results of our transfer learning on TOEFL-manual, TOEFL-ASR, MC160, and MC500, as well as the performance of the previous best models and several ablations that did not use pre-training or fine-tuning. From Table 2 , we have the following observations.
Rows (a) and (g) show the respective results when the QACNN and MemN2N are trained directly on the target datasets without pre-training on MovieQA. Rows (b) and (h) show results when the models are trained only on the MovieQA data. Rows (c) and (i) show results when the models are trained on both MovieQA and each of the four target datasets, and tested on the respective target dataset. We observe that the results achieved in (a), (b), (c), (g), (h), and (i) are worse than their fine-tuned counterparts (d), (e), (f), and (j). Through transfer learning, both QACNN and MemN2N perform better on all the target datasets. For example, QACNN only achieves 57.5% accuracy on MC160 without pre-training on MovieQA, but the accuracy increases by 18.9% with pre-training (rows (d) vs. (a)). In addition, with transfer learning, QACNN outperforms the previous best models on TOEFL-manual by 7%, TOEFL-ASR BIBREF33 by 6.5%, MC160 BIBREF36 by 1.1%, and MC500 BIBREF32 by 1.3%, and becomes the state-of-the-art on all target datasets.
For the QACNN, the trainable parameters are $E, W_{CNN}^{(1)}, W_{CNN}^{(2)}, W_{FC}^{(1)}$, and $W_{FC}^{(2)}$ (Section "Query-Based Attention CNN"). To better understand how transfer learning affects the performance of QACNN, we also report the results of keeping some parameters fixed and only fine-tuning the others. We fine-tune either (i) only the last fully-connected layer $W_{FC}^{(2)}$ while keeping the other parameters fixed (row (d) in Table 2), (ii) the last two fully-connected layers $W_{FC}^{(1)}$ and $W_{FC}^{(2)}$ (row (e)), or (iii) the entire QACNN (row (f)). For TOEFL-manual, TOEFL-ASR, and MC500, QACNN performs best when only the last two fully-connected layers were fine-tuned; for MC160, it performs best when only the last fully-connected layer was fine-tuned. Note that for training the QACNN, we followed the same procedure as in BIBREF35, whereby pre-trained GloVe word vectors BIBREF37 were used to initialize the embedding layer and were not updated during training. Thus, the embedding layer does not depend on the training set, and the effective vocabularies are the same.
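For illustration, a minimal PyTorch sketch of this kind of partial fine-tuning is given below; the toy module and its names (embed, cnn1, cnn2, fc1, fc2) are stand-ins for the actual QACNN, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Toy stand-in for a pre-trained QACNN; layer sizes are arbitrary.
model = nn.ModuleDict({
    "embed": nn.Embedding(5000, 100),
    "cnn1": nn.Conv1d(100, 128, kernel_size=3),
    "cnn2": nn.Conv1d(128, 128, kernel_size=3),
    "fc1": nn.Linear(128, 64),
    "fc2": nn.Linear(64, 4),
})

# Fine-tune only the last two fully-connected layers (row (e) in Table 2):
for name, param in model.named_parameters():
    param.requires_grad = name.startswith(("fc1", "fc2"))

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)
print(sum(p.numel() for p in trainable), "trainable parameters")
```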
It is interesting to see that fine-tuning the entire QACNN does not necessarily produce the best result. For MC500, the accuracy of QACNN drops by 4.6% compared to just fine-tuning the last two fully-connected layers (rows (f) vs. (e)). We conjecture that this is due to the amount of training data in the target datasets - when the training set of the target dataset is too small, fine-tuning all the parameters of a complex model like QACNN may result in overfitting. This finding aligns with observations in other domains where transfer learning is well studied, such as object recognition BIBREF38.
We expected to see that a MemN2N, when trained directly on the target dataset without pre-training on MovieQA, would outperform a MemN2N pre-trained on MovieQA without fine-tuning on the target dataset (rows (g) vs. (h)), since the model is evaluated on the target dataset. However, for the QACNN this is surprisingly not the case - QACNN pre-trained on MovieQA without fine-tuning on the target dataset outperforms QACNN trained directly on the target dataset without pre-training on MovieQA (rows (b) vs. (a)). We attribute this to the limited size of the target dataset and the complex structure of the QACNN.
We conducted experiments to study the relationship between the amount of target-dataset training data used for fine-tuning and the resulting performance. We first pre-train the models on MovieQA, then vary the amount of target-dataset training data used to fine-tune them. Note that for QACNN, we only fine-tune the last two fully-connected layers instead of the entire model, since doing so usually produces the best performance according to Table 2. The results are shown in Table 3. As expected, the more training data is used for fine-tuning, the better the model's performance. We also observe that the improvement from using 0% to 25% of the target training data is consistently larger than that from 25% to 50%, 50% to 75%, or 75% to 100%. Using the QACNN fine-tuned on TOEFL-manual as an example, the accuracy improves by 2.7% when increasing the training size from 0% to 25%, but only by 0.9%, 0.5%, and 0.7% when increasing it from 25% to 50%, 50% to 75%, and 75% to 100%, respectively.
We also vary the size of MovieQA used for pre-training to study how large the source dataset should be to make transfer learning feasible. The results are shown in Table 4. We find that even a small amount of source data can help. For example, by using only 25% of MovieQA for pre-training, the accuracy increases by 6.3% on MC160. This is because 25% of the MovieQA training set (2,462 examples) is still much larger than the MC160 training set (280 examples). As the size of the source dataset increases, the performance of QACNN continues to improve.
We are interested in understanding what types of questions benefit the most from transfer learning. According to the official guide to the TOEFL test, the questions in TOEFL can be divided into 3 types. Type 1 questions are for basic comprehension of the story. Type 2 questions go beyond basic comprehension, but test the understanding of the functions of utterances or the attitude the speaker expresses. Type 3 questions further require the ability of making connections between different parts of the story, making inferences, drawing conclusions, or forming generalizations. We used the split provided by BIBREF33 , which contains 70/18/34 Type 1/2/3 questions. We compare the performance of the QACNN and MemN2N on different types of questions in TOEFL-manual with and without pre-training on MovieQA, and show the results in Figure 2 . From Figure 2 we can observe that for both the QACNN and MemN2N, their performance on all three types of questions improves after pre-training, showing that the effectiveness of transfer learning is not limited to specific types of questions.
Unsupervised Transfer Learning
So far, we have studied the properties of supervised transfer learning for QA, where both the source and target datasets provide the correct answer for each question during pre-training and fine-tuning. We now conduct the unsupervised transfer learning experiments based on the iterative self-labeling algorithm described earlier, where the answers to the questions in the target dataset are not available. We used QACNN as the QA model and all the parameters $(E, W_{CNN}^{(1)}, W_{CNN}^{(2)}, W_{FC}^{(1)},$ and $W_{FC}^{(2)})$ were updated during fine-tuning in this experiment. Since the range of the testing accuracy of the TOEFL series (TOEFL-manual and TOEFL-ASR) is different from that of MCTest (MC160 and MC500), their results are displayed separately in Figure UID29 and Figure UID30, respectively.
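A schematic sketch of the iterative self-labeling procedure is given below; the predict and fine_tune functions are placeholders for the QA model's inference and training routines, and the toy stand-ins only demonstrate the control flow.

```python
import random

def unsupervised_transfer(model, predict, fine_tune, target_questions, epochs=10):
    """Iterative self-labeling: the pre-trained model answers the unlabeled
    target questions, and its own predictions are used as labels for the
    next round of fine-tuning."""
    for epoch in range(epochs):
        pseudo_labeled = [(q, predict(model, q)) for q in target_questions]
        model = fine_tune(model, pseudo_labeled)
    return model

# Toy demonstration with stand-in functions (not the real QACNN):
questions = [f"question-{i}" for i in range(5)]
model = {"updates": 0}
predict = lambda m, q: random.randrange(4)
fine_tune = lambda m, data: {"updates": m["updates"] + len(data)}
print(unsupervised_transfer(model, predict, fine_tune, questions)["updates"])
```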
From Figure UID29 and Figure UID30 we can observe that without ground truth in the target dataset for supervised fine-tuning, transfer learning from a source dataset can still improve the performance through a simple iterative self-labeling mechanism. For TOEFL-manual and TOEFL-ASR, QACNN achieves the highest testing accuracy at Epochs 7 and 8, outperforming its counterpart without fine-tuning by approximately 4% and 5%, respectively. For MC160 and MC500, the QACNN peaks at Epochs 3 and 6, outperforming its counterpart without fine-tuning by about 2% and 6%, respectively. The results also show that the performance of unsupervised transfer learning is still worse than that of supervised transfer learning, which is not surprising, but the effectiveness of unsupervised transfer learning when no ground truth labels are provided is validated.
To better understand the unsupervised transfer learning process of QACNN, we visualize the changes of the word-level attention map during training Epochs 1, 4, 7, and 10 in Figure 4. We use the same question from TOEFL-manual shown in Table 1 as an example. From Figure 4 we can observe that as the training epochs increase, the QACNN focuses more on the context in the story that is related to the question and the correct answer choice. For example, the correct answer is related to “class project”. In Epochs 1 and 4, the model does not focus on the phrase “class representation”, but it attends to the phrase in Epochs 7 and 10. This demonstrates that even without ground truth, the iterative self-labeling process is still able to lead the QA model to gradually focus more on the parts of the story that are important for answering the question.
Conclusion and Future Work
In this paper we demonstrate that a simple transfer learning technique can be very useful for the task of multiple-choice question answering. We use a QACNN and a MemN2N as QA models, with MovieQA as the source task and the TOEFL listening comprehension test and MCTest as the target tasks. By pre-training on MovieQA, the performance of both models on the target datasets improves significantly. The models also require much less training data from the target dataset to achieve performance similar to models without pre-training. We also conduct experiments to study the influence of transfer learning on different types of questions, and show that the effectiveness of transfer learning is not limited to specific types of questions. Finally, we show through quantitative results and visual analysis that, with a simple iterative self-labeling technique, transfer learning remains useful even when the correct answers for target QA dataset examples are not available.
One area of future research will be generalizing the transfer learning results presented in this paper to other QA models and datasets. In addition, since the original data format of the TOEFL listening comprehension test is audio instead of text, it is worth trying to initialize the embedding layer of the QACNN with semantic or acoustic word embeddings learned directly from speech BIBREF39 , BIBREF40 , BIBREF41 instead of those learned from text BIBREF42 , BIBREF37 . | the training dataset is large while the target dataset is usually much smaller |
4c7b29f6e3cc1e902959a1985146ccc0b15fe521 | 4c7b29f6e3cc1e902959a1985146ccc0b15fe521_0 | Q: How do you find the entity descriptions?
Text: Introduction
Knowledge about entities is essential for understanding human language. This knowledge can be attributional (e.g., canFly, isEdible), type-based (e.g., isFood, isPolitician, isDisease) or relational (e.g., marriedTo, bornIn). Knowledge bases (KBs) are designed to store this information in a structured way, so that it can be queried easily. Examples of such KBs are Freebase BIBREF3, Wikipedia, Google knowledge graph and YAGO BIBREF4. For automatically updating and completing entity knowledge, text resources such as news, user forums, textbooks or any other data in the form of text are important sources. Therefore, information extraction methods have been introduced to extract knowledge about entities from text. In this paper, we focus on the extraction of entity types, i.e., assigning types to – or typing – entities. Type information can help the extraction of relations by applying constraints on relation arguments.
We address a problem setting in which the following are given: a KB with a set of entities $E$, a set of types $T$ and a membership function $m: E \times T \mapsto \lbrace 0,1\rbrace $ such that $m(e,t)=1$ iff entity $e$ has type $t$; and a large corpus $C$ in which mentions of $E$ are annotated. In this setting, we address the task of fine-grained entity typing: we want to learn a probability function $S(e,t)$ for a pair of entity $e$ and type $t$, and based on $S(e,t)$ infer whether $m(e,t)=1$ holds, i.e., whether entity $e$ is a member of type $t$.
We address this problem by learning a multi-level representation for an entity that contains the information necessary for typing it. One important source is the contexts in which the entity is used. We can take the standard method of learning embeddings for words and extend it to learning embeddings for entities. This requires the use of an entity linker and can be implemented by replacing all occurrences of the entity by a unique token. We refer to entity embeddings as entity-level representations. Previously, entity embeddings have been learned mostly using bag-of-word models like word2vec (e.g., by Wang14joint and yyhs15fig). We show below that order information is critical for high-quality entity embeddings.
Entity-level representations are often uninformative for rare entities, so that using only entity embeddings is likely to produce poor results. In this paper, we use entity names as a source of information that is complementary to entity embeddings. We define an entity name as a noun phrase that is used to refer to an entity. We learn character and word level representations of entity names.
For the character-level representation, we adopt different character-level neural network architectures. Our intuition is that there is sub/cross word information, e.g., orthographic patterns, that is helpful to get better entity representations, especially for rare entities. A simple example is that a three-token sequence containing an initial like “P.” surrounded by two capitalized words (“Rolph P. Kugl”) is likely to refer to a person.
We compute the word-level representation as the sum of the embeddings of the words that make up the entity name. The sum of the embeddings accumulates evidence for a type/property over all constituents, e.g., a name containing “stadium”, “lake” or “cemetery” is likely to refer to a location. In this paper, we compute our word level representation with two types of word embeddings: (i) using only contextual information of words in the corpus, e.g., by word2vec BIBREF1 and (ii) using subword as well as contextual information of words, e.g., by Facebook's recently released fasttext BIBREF0 .
In this paper, we integrate character-level and word-level with entity-level representations to improve the results of previous work on fine-grained typing of KB entities. We also show how descriptions of entities in a KB can be a complementary source of information to our multi-level representation to improve the results of entity typing, especially for rare entities.
Our main contributions in this paper are:
We release our dataset and source codes: cistern.cis.lmu.de/figment2/.
Related Work
Entity representation. Two main sources of information used for learning entity representation are: (i) links and descriptions in KB, (ii) name and contexts in corpora. We focus on name and contexts in corpora, but we also include (Wikipedia) descriptions. We represent entities on three levels: entity, word and character. Our entity-level representation is similar to work on relation extraction BIBREF5 , BIBREF6 , entity linking BIBREF7 , BIBREF8 , and entity typing BIBREF9 . Our word-level representation with distributional word embeddings is similarly used to represent entities for entity linking BIBREF10 and relation extraction BIBREF11 , BIBREF5 . Novel entity representation methods we introduce in this paper are representation based on fasttext BIBREF0 subword embeddings, several character-level representations, “order-aware” entity-level embeddings and the combination of several different representations into one multi-level representation.
Character-subword level neural networks. Character-level convolutional neural networks (CNNs) are applied by Santos14pos to part of speech (POS) tagging, by Santos15ner, ma2016, and chiu2016 to named entity recognition (NER), by Zhang15ch and Zhang15scratch to sentiment analysis and text categorization, and by kim15 to language modeling (LM). Character-level LSTM is applied by LingDyer15ovwr to LM and POS tagging, by lampe2016 to NER, by BallesterosDyer15chlstm to parsing morphologically rich languages, and by cao2016 to learning word embeddings. subword16 learn word embeddings by representing words with the average of their character ngrams (subwords) embeddings. Similarly, chen2015 extends word2vec for Chinese with joint modeling with characters.
Fine-grained entity typing. Our task is to infer fine-grained types of KB entities. KB completion is an application of this task. yyhs15fig's FIGMENT system addresses this task with only contextual information; they do not use character-level and word-level features of entity names. neelakantan2015inferring and xie16dkrl also address a similar task, but they rely on entity descriptions in KBs, which in many settings are not available. The problem of fine-grained mention typing (FGMT) BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17 is related to our task. FGMT classifies single mentions of named entities to their context dependent types, whereas we attempt to identify all types of a KB entity from the aggregation of all its mentions. FGMT can still be evaluated on our task by aggregating the mention-level decisions but, as we show in our experiments with one such system, FIGER BIBREF13, our entity embedding based models are better at entity typing.
Fine-grained entity typing
Given (i) a KB with a set of entities $E$ , (ii) a set of types $T$ , and (iii) a large corpus $C$ in which mentions of $E$ are linked, we address the task of fine-grained entity typing BIBREF9 : predict whether entity $e$ is a member of type $t$ or not. To do so, we use a set of training examples to learn $P(t|e)$ : the probability that entity $e$ has type $t$ . These probabilities can be used to assign new types to entities covered in the KB as well as typing unknown entities.
We learn $P(t|e)$ with a general architecture; see Figure 1 . The output layer has size $|T|$ . Unit $t$ of this layer outputs the probability for type $t$ . “Entity Representation” ( $\vec{v}(e)$ ) is the vector representation of entity $e$ – we will describe in detail in the rest of this section what forms $\vec{v}(e)$ takes. We model $P(t|e)$ as a multi-label classification, and train a multilayer perceptron (MLP) with one hidden layer:
$$\big [ P(t_1|e) \ldots P(t_T|e) \big ] = \sigma \Big (\textbf {W}\mbox{$_{\hbox{\scriptsize out}}$} f\big (\textbf {W}\mbox{$_{\hbox{\scriptsize in}}$}\vec{v}(e)\big )\Big )$$ (Eq. 5)
where $\textbf {W}\mbox{$_{\hbox{\scriptsize in}}$} \in \mathbb {R}^{h\times d} $ is the weight matrix from $\vec{v}(e) \in \mathbb {R}^d$ to the hidden layer with size $h$ . $f$ is the rectifier function. $\textbf {W}\mbox{$_{\hbox{\scriptsize out}}$} \in \mathbb {R}^{|T| \times h} $ is the weight matrix from hidden layer to output layer of size $|T|$ . $\sigma $ is the sigmoid function. Our objective is binary cross entropy summed over types: $ \sum _{t}{-\Big (m_t \log {p_t} + (1 - m_t) \log {(1 - p_t)} \Big )} $
where $m_t$ is the truth and $p_t$ the prediction.
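A minimal PyTorch sketch of this typing MLP is shown below; the layer sizes are illustrative rather than the tuned hyperparameters of Table 1, and bias terms are included for simplicity.

```python
import torch
import torch.nn as nn

class EntityTyper(nn.Module):
    """MLP of Eq. 5: one rectifier hidden layer and |T| sigmoid outputs,
    trained with binary cross entropy summed over types."""
    def __init__(self, d=200, h=400, n_types=102):
        super().__init__()
        self.w_in = nn.Linear(d, h)
        self.w_out = nn.Linear(h, n_types)

    def forward(self, v_e):
        return torch.sigmoid(self.w_out(torch.relu(self.w_in(v_e))))

model = EntityTyper()
v_e = torch.randn(8, 200)                  # a batch of entity representations v(e)
m = torch.randint(0, 2, (8, 102)).float()  # gold type memberships m_t
loss = nn.BCELoss(reduction="sum")(model(v_e), m)
loss.backward()
print(loss.item())
```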
The key difficulty when trying to compute $P(t|e)$ is in learning a good representation for entity $e$ . We make use of contexts and name of $e$ to represent its feature vector on the three levels of entity, word and character.
Entity-level representation
Distributional representations or embeddings are commonly used for words. The underlying hypothesis is that words with similar meanings tend to occur in similar contexts BIBREF18 and therefore cooccur with similar context words. We can extend the distributional hypothesis to entities (cf. Wang14joint, yyhs15fig): entities with similar meanings tend to have similar contexts. Thus, we can learn a $d$ dimensional embedding $\vec{v}(e)$ of entity $e$ from a corpus in which all mentions of the entity have been replaced by a special identifier. We refer to these entity vectors as the entity level representation (ELR).
In previous work, order information of context words (relative position of words in the contexts) was generally ignored and objectives similar to the SkipGram (henceforth: SKIP) model were used to learn $\vec{v}(e)$ . However, the bag-of-word context is difficult to distinguish for pairs of types like (restaurant,food) and (author,book). This suggests that using order aware embedding models is important for entities. Therefore, we apply wang2vec15's extended version of SKIP, Structured SKIP (SSKIP). It incorporates the order of context words into the objective. We compare it with SKIP embeddings in our experiments.
Word-level representation
Words inside entity names are important sources of information for typing entities. We define the word-level representation (WLR) as the average of the embeddings of the words that the entity name contains $ \vec{v}(e) = 1/n \sum _{i=1}^n \vec{v}(w_i) $
where $\vec{v}(w_i)$ is the embedding of the $i\mbox{$^{\hbox{\scriptsize th}}$}$ word of an entity name of length $n$ . We opt for simple averaging since entity names often consist of a small number of words with clear semantics. Thus, averaging is a promising way of combining the information that each word contributes.
The word embedding, $\vec{w}$ , itself can be learned from models with different granularity levels. Embedding models that consider words as atomic units in the corpus, e.g., SKIP and SSKIP, are word-level. On the other hand, embedding models that represent words with their character ngrams, e.g., fasttext BIBREF0 , are subword-level. Based on this, we consider and evaluate word-level WLR (WWLR) and subword-level WLR (SWLR) in this paper.
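A minimal sketch of the WLR computation, assuming a simple whitespace tokenizer and skipping out-of-vocabulary words:

```python
import numpy as np

def word_level_representation(entity_name, word_vectors, dim=200):
    """WLR: average of the embeddings of the words in the entity name.
    `word_vectors` is any mapping from word to vector (e.g., SkipGram,
    SSkip, or fasttext embeddings); skipping unknown words is a
    simplification made here for illustration."""
    vecs = [word_vectors[w] for w in entity_name.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

toy_vectors = {"lake": np.full(200, 0.1), "kasumigaura": np.full(200, 0.3)}
print(word_level_representation("Lake Kasumigaura", toy_vectors)[:3])
```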
Character-level representation
For computing the character level representation (CLR), we design models that try to type an entity based on the sequence of characters of its name. Our hypothesis is that names of entities of a specific type often have similar character patterns. Entities of type ethnicity often end in “ish” and “ian”, e.g., “Spanish” and “Russian”. Entities of type medicine often end in “en”: “Lipofen”, “acetaminophen”. Also, some types tend to have specific cross-word shapes in their entities, e.g., person names usually consist of two words, or music names are usually long, containing several words.
The first layer of the character-level models is a lookup table that maps each character to an embedding of size $d_c$ . These embeddings capture similarities between characters, e.g., similarity in type of phoneme encoded (consonant/vowel) or similarity in case (lower/upper). The output of the lookup layer for an entity name is a matrix $C \in \mathbb {R}^{l \times d_c}$ where $l$ is the maximum length of a name and all names are padded to length $l$ . This length $l$ includes special start/end characters that bracket the entity name.
We experiment with four architectures to produce character-level representations in this paper: FORWARD (direct forwarding of character embeddings), CNNs, LSTMs and BiLSTMs. The output of each architecture then takes the place of the entity representation $\vec{v}(e)$ in Figure 1 .
FORWARD simply concatenates all rows of matrix $C$ ; thus, $\vec{v}(e) \in \mathbb {R}^{d_c*l}$ .
The CNN uses $k$ filters of different window widths $w$ to narrowly convolve $C$ . For each filter $H \in \mathbb {R}^{d_c\times w}$ , the result of the convolution of $H$ over matrix $C$ is feature map $f \in \mathbb {R}^{l-w+1}$ :
$f[i] = \mbox{rectifier}(C_{[:, i : i + w - 1]} \odot H + b)$
where rectifier is the activation function, $b$ is the bias, $C_{[:, i : i + w - 1]}$ are the columns $i$ to $i + w - 1$ of $C$, $ 1\le w\le 10$ are the window widths we consider and $\odot $ is the sum of element-wise multiplication. Max pooling then gives us one feature for each filter. The concatenation of all these features is our representation: $\vec{v}(e) \in \mathbb {R}^{k}$. An example CNN architecture is shown in Figure 2.
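A minimal PyTorch sketch of this character-level CNN is shown below; the alphabet size, character embedding size, window widths, and number of filters are illustrative, not the tuned values of Table 1.

```python
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    """Character-level CLR: embed characters, apply filters of several
    window widths, max-pool each feature map, and concatenate."""
    def __init__(self, n_chars=100, d_c=16, widths=(2, 3, 4, 5), k_per_width=25):
        super().__init__()
        self.embed = nn.Embedding(n_chars, d_c)
        self.convs = nn.ModuleList(
            nn.Conv1d(d_c, k_per_width, kernel_size=w) for w in widths)

    def forward(self, char_ids):                  # (batch, l) padded character ids
        c = self.embed(char_ids).transpose(1, 2)  # (batch, d_c, l)
        feats = [torch.relu(conv(c)).max(dim=2).values for conv in self.convs]
        return torch.cat(feats, dim=1)            # (batch, k) entity representation

x = torch.randint(0, 100, (4, 40))                # 4 padded entity names of length 40
print(CharCNN()(x).shape)                         # torch.Size([4, 100])
```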
The input to the LSTM is the character sequence in matrix $C$ , i.e., $x_1,\dots ,x_l \in \mathbb {R}^{d_c}$ . It generates the state sequence $h_1, . . . ,h_{l+1}$ and the output is the last state $\vec{v}(e) \in \mathbb {R}^{d_h}$ .
The BiLSTM consists of two LSTMs, one going forward, one going backward. The first state of the backward LSTM is initialized as $h_{l+1}$ , the last state of the forward LSTM. The BiLSTM entity representation is the concatenation of last states of forward and backward LSTMs, i.e., $\vec{v}(e) \in \mathbb {R}^{2 * d_h}$ .
Multi-level representations
Our different levels of representations can give complementary information about entities.
WLR and CLR. Both WLR models, SWLR and WWLR, do not have access to the cross-word character ngrams of entity names while CLR models do. Also, CLR is task specific by training on the entity typing dataset while WLR is generic. On the other hand, WWLR and SWLR models have access to information that CLR ignores: the tokenization of entity names into words and embeddings of these words. It is clear that words are particularly important character sequences since they often correspond to linguistic units with clearly identifiable semantics – which is not true for most character sequences. For many entities, the words they contain are a better basis for typing than the character sequence. For example, even if “nectarine” and “compote” did not occur in any names in the training corpus, we can still learn good word embeddings from their non-entity occurrences. This then allows us to correctly type the entity “Aunt Mary's Nectarine Compote” as food based on the sum of the word embeddings.
WLR/CLR and ELR. Representations from entity names, i.e., WLR and CLR, by themselves are limited because many classes of names can be used for different types of entities; e.g., person names do not contain hints as to whether they are referring to a politician or athlete. In contrast, the ELR embedding is based on an entity's contexts, which are often informative for each entity and can distinguish politicians from athletes. On the other hand, not all entities have sufficiently many informative contexts in the corpus. For these entities, their name can be a complementary source of information and character/word level representations can increase typing accuracy.
Thus, we introduce joint models that use combinations of the three levels. Each multi-level model concatenates several levels. We train the constituent embeddings as follows. WLR and ELR are computed as described above and are not changed during training. CLR – produced by one of the character-level networks described above – is initialized randomly and then tuned during training. Thus, it can focus on complementary information related to the task that is not already present in other levels. The schematic diagram of our multi-level representation is shown in Figure 3 .
Setup
Entity datasets and corpus. We address the task of fine-grained entity typing and use yyhs15fig's FIGMENT dataset for evaluation. The FIGMENT corpus is part of a version of ClueWeb in which Freebase entities are annotated using FACC1 BIBREF20 , BIBREF21 . The FIGMENT entity datasets contain 200,000 Freebase entities that were mapped to 102 FIGER types BIBREF13 . We use the same train (50%), dev (20%) and test (30%) partitions as yyhs15fig and extract the names from mentions of dataset entities in the corpus. We take the most frequent name for dev and test entities and three most frequent names for train (each one tagged with entity types).
Adding parent types to refine entity dataset. FIGMENT ignores that FIGER is a proper hierarchy of types; e.g., while hospital is a subtype of building according to FIGER, there are entities in FIGMENT that are hospitals, but not buildings. Therefore, we modified the FIGMENT dataset by adding for each assigned type (e.g., hospital) its parents (e.g., building). This makes FIGMENT more consistent and eliminates spurious false negatives (building in the example).
We now describe our baselines: (i) BOW & NSL: hand-crafted features, (ii) FIGMENT BIBREF9 and (iii) adapted version of FIGER BIBREF13 .
We implement the following two feature sets from the literature as a hand-crafted baseline for our character and word level models. (i) BOW: individual words of entity name (both as-is and lowercased); (ii) NSL (ngram-shape-length): shape and length of the entity name (cf. ling2012fine), character $n$ -grams, $1 \le n \le n\mbox{$_{\hbox{\scriptsize max}}$}, n\mbox{$_{\hbox{\scriptsize max}}$}=5$ (we also tried $n\mbox{$_{\hbox{\scriptsize max}}$}=7$ , but results were worse on dev) and normalized character $n$ -grams: lowercased, digits replaced by “7”, punctuation replaced by “.”. These features are represented as a sparse binary vector $\vec{v}(e)$ that is input to the architecture in Figure 1 .
FIGMENT is the model for entity typing presented by yyhs15fig. The authors only use entity-level representations for entities trained by SkipGram, so the FIGMENT baseline corresponds to the entity-level result shown as ELR(SKIP) in the tables.
The third baseline is using an existing mention-level entity typing system, FIGER BIBREF13 . FIGER uses a wide variety of features on different levels (including parsing-based features) from contexts of entity mentions as well as the mentions themselves and returns a score for each mention-type instance in the corpus. We provide the ClueWeb/FACC1 segmentation of entities, so FIGER does not need to recognize entities. We use the trained model provided by the authors and normalize FIGER scores using softmax to make them comparable for aggregation. We experimented with different aggregation functions (including maximum and k-largest-scores for a type), but we use the average of scores since it gave us the best result on dev. We call this baseline AGG-FIGER.
Distributional embeddings. For WWLR and ELR, we use SkipGram model in word2vec and SSkip model in wang2vec BIBREF2 to learn embeddings for words, entities and types. To obtain embeddings for all three in the same space, we process ClueWeb/FACC1 as follows. For each sentence $s$ , we add three copies: $s$ itself, a copy of $s$ in which each entity is replaced with its Freebase identifier (MID) and a copy in which each entity (not test entities though) is replaced with an ID indicating its notable type. The resulting corpus contains around 4 billion tokens and 1.5 billion types.
We run SKIP and SSkip with the same setup (200 dimensions, 10 negative samples, window size 5, word frequency threshold of 100) on this corpus to learn embeddings for words, entities and FIGER types. Having entities and types in the same vector space, we can add another feature vector $\vec{v}(e) \in \mathbb {R}^{|T|}$ (referred to as TC below): for each entity, we compute cosine similarity of its entity vector with all type vectors.
For SWLR, we use fasttext to learn word embeddings from the ClueWeb/FACC1 corpus. We use similar settings as our WWLR SKIP and SSkip embeddings and keep the defaults of other hyperparameters. Since the trained model of fasttext is applicable for new words, we apply the model to get embeddings for the filtered rare words as well.
Our hyperparameter values are given in Table 1 . The values are optimized on dev. We use AdaGrad and minibatch training. For each experiment, we select the best model on dev.
We use these evaluation measures: (i) accuracy: an entity is correct if all its types and no incorrect types are assigned to it; (ii) micro average $F_1$ : $F_1$ of all type-entity assignment decisions; (iii) entity macro average $F_1$ : $F_1$ of types assigned to an entity, averaged over entities; (iv) type macro average $F_1$ : $F_1$ of entities assigned to a type, averaged over types.
The assignment decision is based on thresholding the probability function $P(t|e)$ . For each model and type, we select the threshold that maximizes $F_1$ of entities assigned to the type on dev.
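A minimal sketch of this per-type threshold selection on the dev set, assuming a simple grid of candidate thresholds:

```python
import numpy as np

def best_threshold(scores, gold, grid=np.linspace(0.05, 0.95, 19)):
    """Pick, for one type, the decision threshold on P(t|e) that maximizes
    F1 of entities assigned to that type on the dev set."""
    def f1(th):
        pred = scores >= th
        tp = np.sum(pred & gold)
        if tp == 0:
            return 0.0
        prec, rec = tp / pred.sum(), tp / gold.sum()
        return 2 * prec * rec / (prec + rec)
    return max(grid, key=f1)

dev_scores = np.array([0.9, 0.7, 0.4, 0.2, 0.1])   # P(t|e) for five dev entities
dev_gold = np.array([True, True, False, False, False])
print(best_threshold(dev_scores, dev_gold))
```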
Results
Table 2 gives results on the test entities for all (about 60,000 entities), head (frequency $>$ 100; about 12,200) and tail (frequency $<$ 5; about 10,000). MFT (line 1) is the most frequent type baseline that ranks types according to their frequency in the train entities. Each level of representation is separated with dashed lines, and – unless noted otherwise – the best of each level is joined in multi level representations.
Character-level models are on lines 2-6. The order of systems is: CNN $>$ NSL $>$ BiLSTM $>$ LSTM $>$ FORWARD. The results show that complex neural networks are more effective than simple forwarding. BiLSTM works better than LSTM, confirming other related work. CNNs probably work better than LSTMs because there are few complex non-local dependencies in the sequence, but many important local features. CNNs with maxpooling can more straightforwardly capture local and position-independent features. CNN also beats NSL baseline; a possible reason is that CNN – an automatic method of feature learning – is more robust than hand engineered feature based NSL. We show more detailed results in Section "Analysis" .
Word-level models are on lines 7-10. BOW performs worse than WWLR because it cannot deal well with sparseness. SSKIP uses word order information in WWLR and performs better than SKIP. SWLR uses subword information and performs better than WWLR, especially for tail entities. Integrating subword information improves the quality of embeddings for rare words and mitigates the problem of unknown words.
Joint word-character level models are on lines 11-13. WWLR+CLR(CNN) and SWLR+CLR(CNN) beat their component models. This confirms our underlying assumption in designing the complementary multi-level models. BOW's problem with rare words does not allow its joint model with NSL to work better than NSL alone. WWLR+CLR(CNN) works better than BOW+CLR(NSL) by 10% micro $F_1$, again due to the limits of BOW compared to WWLR. Interestingly, WWLR+CLR works better than SWLR+CLR, which suggests that WWLR is indeed richer than SWLR once CLR mitigates its problem with rare/unknown words.
Entity-level models are on lines 14–15 and they are better than all previous models on lines 1–13. This shows the power of entity-level embeddings. In Figure 4, a t-SNE BIBREF22 visualization of ELR(SKIP) embeddings using different colors for entity types shows that entities of the same type are clustered together. SSKIP works marginally better than SKIP for ELR, especially for tail entities, confirming our hypothesis that order information is important for a good distributional entity representation. This also confirms the results of derata16acl, who also obtain better entity typing results with SSKIP than with SKIP and who propose entity typing as an extrinsic evaluation for embedding models.
Joint entity, word, and character level models are on lines 16-23. The AGG-FIGER baseline works better than the systems on lines 1-13, but worse than ELRs. This is probably due to the fact that AGG-FIGER is optimized for mention typing and is trained using the distant supervision assumption. In parallel work, ourjoint2017 optimize a mention typing model for our entity typing task by introducing multi-instance learning algorithms, resulting in performance comparable to ELR(SKIP). We will investigate their method in future work.
Joining CLR with ELR (line 17) results in large improvements, especially for tail entities (5% micro $F_1$ ). This demonstrates that for rare entities, contextual information is often not sufficient for an informative representation, hence name features are important. This is also true for the joint models of WWLR/SWLR and ELR (lines 18-19). Joining WWLR works better than CLR, and SWLR is slightly better than WWLR. Joint models of WWLR/SWLR with ELR+CLR gives more improvements, and SWLR is again slightly better than WWLR. ELR+WWLR+CLR and ELR+SWLR+CLR, are better than their two-level counterparts, again confirming that these levels are complementary.
We get a further boost, especially for tail entities, by also including TC (type cosine) in the combinations (lines 22-23). This demonstrates the potential advantage of having a common representation space for entities and types. Our best model, ELR+SWLR+CLR+TC (line 22), which we refer to as MuLR in the other tables, beats our initial baselines (ELR and AGG-FIGER) by large margins, e.g., in tail entities improvements are more than 8% in micro F1.
Table 2 shows type macro $F_1$ for MuLR (ELR+SWLR+CLR+TC) and two baselines. There are 11 head types (those with $\ge $ 3000 train entities) and 36 tail types (those with $<$ 200 train entities). These results again confirm the superiority of our multi-level models over the baselines: AGG-FIGER and ELR, the best single-level model baseline.
Analysis
Unknown vs. known entities. To analyze the complementarity of character and word level representations, as well as more fine-grained comparison of our models and the baselines, we divide test entities into known entities – at least one word of the entity's name appears in a train entity – and unknown entities (the complement). There are 45,000 (resp. 15,000) known (resp. unknown) test entities.
Table 2 shows that the CNN works only slightly better (by 0.3%) than NSL on known entities, but works much better on unknown entities (by 3.3%), justifying our preference for deep learning CLR models. As expected, BOW works relatively well for known entities and really poorly for unknown entities. SWLR beats CLR models as well as BOW. The reason is that in our setup, word embeddings are induced on the entire corpus using an unsupervised algorithm. Thus, even for many words that did not occur in train, SWLR has access to informative representations of words. The joint model, SWLR+CLR(CNN), is significantly better than BOW+CLR(NSL) again due to limits of BOW. SWLR+CLR(CNN) is better than SWLR in unknown entities.
Case study of living-thing. To understand the interplay of different levels better, we perform a case study of the type living-thing. Living beings that are not humans belong to this type.
WLRs incorrectly assign “Walter Leaf” (person) and “Along Came A Spider” (music) to living-thing because these names contain a word referring to a living-thing (“leaf”, “spider”), but the entity itself is not a living-thing. In these cases, the averaging of embeddings that WLR performs is misleading. The CLR(CNN) types these two entities correctly because their names contain character ngram/shape patterns that are indicative of person and music.
ELR incorrectly assigns “Zumpango” (city) and “Lake Kasumigaura” (location) to living-thing because these entities are rare and words associated with living things (e.g., “wildlife”) dominate in their contexts. However, CLR(CNN) and WLR enable the joint model to type the two entites correctly: “Zumpango” because of the informative suffix “-go” and “Lake Kasumigaura” because of the informative word “Lake”.
While some of the remaining errors of our best system MuLR are due to the inherent difficulty of entity typing (e.g., it is difficult to correctly type a one-word entity that occurs once and whose name is not informative), many other errors are due to artifacts of our setup. First, ClueWeb/FACC1 is the result of an automatic entity linking system and any entity linking errors propagate to our models. Second, due to the incompleteness of Freebase BIBREF9 , many entities in the FIGMENT dataset are incompletely annotated, resulting in correctly typed entities being evaluated as incorrect.
Adding another source: description-based embeddings. While in this paper we focus on the contexts and names of entities, there is another textual source of information about entities in KBs that we can also make use of: descriptions of entities. We extract Wikipedia descriptions of FIGMENT entities, filtering out the entities ( $\sim $ 40,000 out of $\sim $ 200,000) without a description.
We then build a simple entity representation by averaging the embeddings of the top $k$ words (wrt tf-idf) of the description (henceforth, AVG-DES). This representation is used as the input in Figure 1 to train the MLP. We also train our best multi-level model as well as the joint model of the two on this smaller dataset. Since the descriptions come from Wikipedia, we use 300-dimensional Glove BIBREF23 embeddings pretrained on Wikipedia+Gigaword to get better coverage of words. For MuLR, we still use the embeddings we trained before.
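A minimal sketch of AVG-DES is shown below; fitting tf-idf on a single description and the toy word-vector dictionary are simplifications made for illustration.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def avg_des(description, word_vectors, k=20, dim=300):
    """AVG-DES: average the embeddings of the top-k description words
    ranked by tf-idf. In practice idf would be computed over the whole
    description corpus rather than a single description."""
    tfidf = TfidfVectorizer(stop_words="english")
    weights = tfidf.fit_transform([description]).toarray()[0]
    vocab = np.array(tfidf.get_feature_names_out())
    top_words = vocab[np.argsort(weights)[::-1][:k]]
    vecs = [word_vectors[w] for w in top_words if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

toy = {"lake": np.ones(300), "japan": np.full(300, 0.5)}
print(avg_des("Lake Kasumigaura is the second largest lake in Japan.", toy)[:2])
```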
Results are shown in Table 3 . While for head entities, MuLR works marginally better, the difference is very small in tail entities. The joint model of the two (by concatenation of vectors) improves the micro F1, with clear boost for tail entities. This suggests that for tail entities, the contextual and name information is not enough by itself and some keywords from descriptions can be really helpful. Integrating more complex description-based embeddings, e.g., by using CNN BIBREF24 , may improve the results further. We leave it for future work.
Conclusion
In this paper, we have introduced representations of entities on different levels: character, word and entity. The character level representation is learned from the entity name. The word level representation is computed from the embeddings of the words $w_i$ in the entity name where the embedding of $w_i$ is derived from the corpus contexts of $w_i$ . The entity level representation of entity $e_i$ is derived from the corpus contexts of $e_i$ . Our experiments show that each of these levels contributes complementary information for the task of fine-grained typing of entities. The joint model of all three levels beats the state-of-the-art baseline by large margins. We further showed that extracting some keywords from Wikipedia descriptions of entities, when available, can considerably improve entity representations, especially for rare entities. We believe that our findings can be transferred to other tasks where entity representation matters.
Acknowledgments. This work was supported by DFG (SCHU 2246/8-2). | Wikipedia |
b34c60eb4738e0439523bcc679fe0fe70ceb8bde | b34c60eb4738e0439523bcc679fe0fe70ceb8bde_0 | Q: How is OpenBookQA different from other natural language QA?
Text: Introduction
Natural language based question answering (NLQA) not only involves linguistic understanding, but often involves reasoning with various kinds of knowledge. In recent years, many NLQA datasets and challenges have been proposed, for example, SQuAD BIBREF0, TriviaQA BIBREF1 and MultiRC BIBREF2, and each of them has its own focus, sometimes by design and other times by virtue of their development methodology. Many of these datasets and challenges try to mimic human question answering settings. One such setting is open book question answering where humans are asked to answer questions in a setup where they can refer to books and other materials related to their questions. In such a setting, the focus is not on memorization but, as mentioned in BIBREF3, on “deeper understanding of the materials and its application to new situations BIBREF4, BIBREF5.” In BIBREF3, they propose the OpenBookQA dataset mimicking this setting.
The OpenBookQA dataset has a collection of questions and four answer choices for each question. The dataset comes with 1326 facts representing an open book. It is expected that answering each question requires at least one of these facts. In addition, it requires common knowledge. To obtain relevant common knowledge we use an IR system BIBREF6 as a front end to a set of knowledge-rich sentences. Compared to the reading comprehension based QA (RCQA) setup, where the answer to a question is usually found in the given small paragraph, in the OpenBookQA setup the open book part is much larger (than a small paragraph) and is not complete, as additional common knowledge may be required. This leads to multiple challenges. First, finding the relevant facts in an open book (which is much bigger than the small paragraphs in the RCQA setting) is a challenge. Then, finding the relevant common knowledge using the IR front end is an even bigger challenge, especially since standard IR approaches can be misled by distractions. For example, Table 1 shows a sample question from the OpenBookQA dataset. We can see that the retrieved missing knowledge contains words which overlap with both answer options A and B. Introduction of such knowledge sentences increases confusion for the question answering model. Finally, reasoning involving both facts from the open book and common knowledge leads to multi-hop reasoning with respect to natural language text, which is also a challenge.
We address the first two challenges and make the following contributions in this paper: (a) We improve on knowledge extraction from the OpenBook present in the dataset. We use semantic textual similarity models that are trained with different datasets for this task; (b) We propose natural language abduction to generate queries for retrieving missing knowledge; (c) We show how to use Information Gain based Re-ranking to reduce distractions and remove redundant information; (d) We provide an analysis of the dataset and the limitations of the BERT Large model for such a question answering task.
The current best model on the leaderboard of OpenBookQA is the BERT Large model BIBREF7. It has an accuracy of 60.4% and does not use external knowledge. Our knowledge selection and retrieval techniques achieve an accuracy of 72%, a margin of 11.6% over the current state of the art. We study how the accuracy of the BERT Large model varies with the number of knowledge facts extracted from the OpenBook and through IR.
Related Work
In recent years, several datasets have been proposed for natural language question answering BIBREF0 , BIBREF1 , BIBREF2 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 and many attempts have been made to solve these challenges BIBREF7 , BIBREF14 , BIBREF15 .
Among these, the closest to our work is the work in BIBREF7 which perform QA using fine tuned language model and the works of BIBREF16 , BIBREF17 which performs QA using external knowledge.
Related to our work for extracting missing knowledge are the works of BIBREF18 , BIBREF19 , BIBREF20 which respectively generate a query either by extracting key terms from a question and an answer option or by classifying key terms or by Seq2Seq models to generate key terms. In comparison, we generate queries using the question, an answer option and an extracted fact using natural language abduction.
The task of natural language abduction for natural language understanding has been studied for a long time BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 . However, such works transform the natural language text to a logical form and then use formal reasoning to perform the abduction. On the contrary, our system performs abduction over natural language text without translating the texts to a logical form.
Approach
Our approach involves six main modules: Hypothesis Generation, OpenBook Knowledge Extraction, Abductive Information Retrieval, Information Gain based Re-ranking, Passage Selection and Question Answering. A key aspect of our approach is to accurately hunt the needed knowledge facts from the OpenBook knowledge corpus and hunt missing common knowledge using IR. We explain our approach in the example given in Table 2 .
In Hypothesis Generation, our system generates a hypothesis $\mathbf {H_{ij}}$ for the $i$ th question and $j$ th answer option, where $j \in \lbrace 1,2,3,4\rbrace $. In OpenBook Knowledge Extraction, our system retrieves appropriate knowledge $\mathbf {F_{ij}}$ for a given hypothesis $\mathbf {H_{ij}}$ using semantic textual similarity, from the OpenBook knowledge corpus $\mathbf {F}$. In Abductive Information Retrieval, our system abduces missing knowledge from $\mathbf {H_{ij}}$ and $\mathbf {F_{ij}}$. The system formulates queries to perform IR to retrieve missing knowledge $\mathbf {K_{ij}}$. With the retrieved $\mathbf {F_{ij}}$ and $\mathbf {K_{ij}}$, Information Gain based Re-ranking and Passage Selection are used to create a knowledge passage for each hypothesis. In Question Answering, our system uses these passages to answer the questions using a BERT Large based MCQ model, similar to its use in solving SWAG BIBREF29.
Hypothesis Generation
Our system creates a hypothesis for each of the questions and candidate answer options as part of the data preparation phase, as shown in the example in Table 2. The questions in the OpenBookQA dataset are either wh-questions or incomplete statements. To create hypothesis statements for the wh-questions, we use the rule-based model of BIBREF30. For the rest of the questions, we concatenate the questions with each of the answers to produce the four hypotheses. This has been done for all the training, test and validation sets.
OpenBook Knowledge Extraction
To retrieve a small set of relevant knowledge facts from the knowledge corpus $\mathbf {F}$ , a textual similarity model is trained in a supervised fashion on two different datasets and the results are compared. We use the large-cased BERT BIBREF7 (BERT Large) as the textual similarity model.
We train it on the semantic textual similarity (STS-B) data from the GLUE dataset BIBREF31 . The trained model is then used to retrieve the top ten knowledge facts from corpus $\mathbf {F}$ based on the STS-B scores. The STS-B scores range from 0 to 5.0, with 0 being least similar.
We generate the dataset using the gold OpenBookQA facts from $\mathbf {F}$ for the train and validation set provided. To prepare the train set, we first find the similarity of the OpenBook $\mathbf {F}$ facts with respect to each other using the BERT model trained on STS-B dataset. We assign a score 5.0 for the gold $\mathbf {\hat{F_i}}$ fact for a hypothesis. We then sample different facts from the OpenBook and assign the STS-B similarity scores between the sampled fact and the gold fact $\mathbf {\mathbf {\hat{F}_{i}}}$ as the target score for that fact $\mathbf {F_{ij}}$ and $\mathbf {H_{ij}}$ . For example:
Hypothesis: Frilled sharks and angler fish live far beneath the surface of the ocean, which is why they are known as Deep sea animals.
Gold Fact: deep sea animals live deep in the ocean : Score : 5.0
Sampled Facts: coral lives in the ocean : Score : 3.4; a fish lives in water : Score : 2.8
We do this to ensure a balanced target score is present for each hypothesis and fact. We use this trained model to retrieve top ten relevant facts for each $\mathbf {H_{ij}}$ from the knowledge corpus $\mathbf {F}$ .
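A minimal sketch of this retrieval step is shown below; the token-overlap similarity function is only a stand-in for the fine-tuned BERT similarity model so that the snippet is self-contained.

```python
# Score every fact in the OpenBook corpus F against the hypothesis with a
# textual-similarity model and keep the top-N facts.
def similarity(hypothesis, fact):
    h, f = set(hypothesis.lower().split()), set(fact.lower().split())
    return len(h & f) / max(len(f), 1)

def retrieve_facts(hypothesis, fact_corpus, top_n=10):
    ranked = sorted(fact_corpus, key=lambda f: similarity(hypothesis, f), reverse=True)
    return ranked[:top_n]

facts = ["deep sea animals live deep in the ocean",
         "coral lives in the ocean",
         "a fish lives in water"]
hyp = ("Frilled sharks and angler fish live far beneath the surface of the "
       "ocean, which is why they are known as deep sea animals.")
print(retrieve_facts(hyp, facts, top_n=2))
```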
Question: .. they decide the best way to save money is ? (A) to quit eating lunch out (B) to make more phone calls (C) to buy less with monopoly money (D) to have lunch with friends
Knowledge extraction trained with STS-B: using less resources usually causes money to be saved; a disperser disperses; each season occurs once per year
Knowledge extraction trained with OpenBookQA: using less resources usually causes money to be saved; decreasing something negative has a positive impact on a thing; conserving resources has a positive impact on the environment
Table 3 shows a comparative study of our three approaches for OpenBook knowledge extraction. We show the number of correct OpenBook knowledge facts extracted, across all four answer options, for the three approaches: TF-IDF, the BERT model trained on STS-B data, and the BERT model trained on OpenBookQA data. Apart from that, we also show the count of facts retrieved precisely for the correct answer options. It can be seen that Precision@N for the BERT model trained on OpenBookQA data is better than for the other models as N increases.
The above example presents the facts retrieved by the BERT model trained on OpenBookQA data, which are more relevant than the facts retrieved by the BERT model trained on STS-B. Both models were able to find the most relevant fact, but the other facts retrieved by the STS-B model introduce more distractors and have less relevance. The impact of this is visible in the accuracy scores for the QA task in Table 3. The best performance of the BERT QA model, 66.2%, is obtained using only OpenBook facts.
Natural Language Abduction and IR
To search for the missing knowledge, we need to know what we are missing. We use “abduction” to figure that out. Abduction is a long studied task in AI, where normally both the observation (hypothesis) and the domain knowledge (known fact) are represented in a formal language from which a logical solver abduces possible explanations (missing knowledge). However, in our case, both the observation and the domain knowledge are given as natural language sentences from which we want to find out possible missing knowledge, which we will then hunt using IR. For example, one of the hypotheses $\mathbf {H_{ij}}$ is “A red-tailed hawk is searching for prey. It is most likely to swoop down on a gecko.”, for which the known fact $\mathbf {F_{ij}}$ is “hawks eats lizards”. From this we expect the output of the natural language abduction system to be $\mathbf {K_{ij}}$ or “gecko is a lizard”. We will refer to this as “natural language abduction”.
For natural language abduction, we propose three models, compare them against a baseline model and evaluate each on a downstream question answering task. All the models ignore stop words except the Seq2Seq model. We describe the three models and a baseline model in the subsequent subsections.
We design a simple heuristic based model defined as below: $ K_{ij} = (H_{ij} \cup F_{ij}) \setminus (H_{ij} \cap F_{ij}) \quad \forall j \in \lbrace 1,2,3,4\rbrace $
where $i$ is the $i$ th question, $j$ is the $j$ th option, $H_{ij}$ , $F_{ij}$ , $K_{ij}$ represents set of unique words of each instance of hypothesis, facts retrieved from knowledge corpus $\mathbf {F}$ and abduced missing knowledge of validation and test data respectively.
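A minimal sketch of this heuristic, with a small placeholder stop-word list:

```python
# Heuristic abduction: candidate missing-knowledge words are the symmetric
# difference of the hypothesis and retrieved-fact word sets, with stop words
# removed. The stop-word list here is a small placeholder for illustration.
STOP = {"a", "an", "the", "is", "are", "it", "to", "of", "on", "for", "most", "likely"}

def abduce_heuristic(hypothesis, fact):
    h = {w for w in hypothesis.lower().replace(".", "").split() if w not in STOP}
    f = {w for w in fact.lower().split() if w not in STOP}
    return (h | f) - (h & f)          # (H union F) minus (H intersection F)

hyp = "A red-tailed hawk is searching for prey. It is most likely to swoop down on a gecko."
fact = "hawks eats lizards"
print(sorted(abduce_heuristic(hyp, fact)))
```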
In the Supervised Bag of Words model, we select words which satisfy the following condition: $ P(w_n \in K_{ij}) > \theta $
where $w_n \in \lbrace H_{ij} \cup F_{ij}\rbrace $ . To elaborate, we learn the probability of a given word $w_n$ from the set of words in $H_{ij} \cup F_{ij}$ belonging to the abduced missing knowledge $K_{ij}$ . We select those words which are above the threshold $\theta $ .
To learn this probability, we create a training and validation dataset where the words similar (by cosine similarity using spaCy) BIBREF32 to the words in the gold missing knowledge $\hat{K}_i$ (provided in the dataset) are labelled as the positive class, and all the other words not present in $\hat{K}_i$ but in $H_{ij} \cup F_{ij}$ are labelled as the negative class. Both classes are ensured to be balanced. Finally, we train a binary classifier using BERT Large with one additional feed-forward network for classification. We set the threshold $\theta $ based on the accuracy of the classifier on the validation set; $0.4$ was selected.
In the final approach, we used the copynet sequence to sequence model BIBREF33 to generate, instead of predict, the missing knowledge given, the hypothesis $\mathbf {H}$ and knowledge fact from the corpus $\mathbf {F}$ . The intuition behind using copynet model is to make use of the copy mechanism to generate essential yet precise (minimizing distractors) information which can help in answering the question. We generate the training and validation dataset using the gold $\mathbf {\hat{K}_i}$ as the target sentence, but we replace out-of-vocabulary words from the target with words similar (cosine similarity using spaCy) BIBREF32 to the words present in $H_{ij} \cup F_{ij}$ . Here, however, we did not remove the stopwords. We choose one, out of multiple generated knowledge based on our model which provided maximum overlap_score, given by $ overlap\_score = \frac{\sum _{i}{count ((\hat{H}_{i} \cup F_{i})\cap K_{i})}}{\sum _{i}{count(\hat{K_{i}})}} $
where $i$ is the $i$ th question, $\hat{H}_{i}$ being the set of unique words of correct hypothesis, $F_{i}$ being the set of unique words from retrieved facts from knowledge corpus $\mathbf {F}$ , $K_{i}$ being the set of unique words of predicted missing knowledge and $\hat{K_i}$ being the set of unique words of the gold missing knowledge .
To see if abduction helps, we compare the above models with a Word Union Model. To extract the candidate words for missing knowledge, we used the set of unique words from both the hypothesis and OpenBook knowledge as candidate keywords. The model can be formally represented with the following: $ K_{ij} = (H_{ij} \cup F_{ij}) \quad \forall j \in \lbrace 1,2,3,4\rbrace $
Information Gain based Re-ranking
In our experiments we observe that the BERT QA model gives a higher score if similar sentences are repeated, which leads to wrong classifications. Thus, we introduce Information Gain based Re-ranking to remove redundant information.
We use the same BERT knowledge extraction model trained on OpenBookQA data (Section "OpenBook Knowledge Extraction"), which is used for extraction of knowledge facts from corpus $\mathbf {F}$, to do an initial ranking of the retrieved missing knowledge $\mathbf {K}$. The scores of this knowledge extraction model are used as the relevancy score, $rel$. To extract the top ten missing knowledge facts from $\mathbf {K}$, we define a redundancy score, $red_{ij}$, as the maximum cosine similarity, $sim$, between the missing knowledge selected in the previous iterations up to $i$ and the candidate missing knowledge $K_j$. If the last selected missing knowledge is $K_i$, then $ red_{ij}(K_j) = max(red_{i-1,j}(K_j), sim(K_i,K_j)) $. The candidates are then ranked by a $rank\_score$ that combines the relevancy score $rel$ with the redundancy score $red_{ij}$.
For missing knowledge selection, we first take the missing knowledge with the highest $rel$ score. In each subsequent iteration, we compute the redundancy score against the last selected missing knowledge for each remaining candidate and re-rank the candidates using the updated $rank\_score$. We select the top ten missing knowledge sentences for each $\mathbf {H_{ij}}$.
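The selection procedure amounts to a greedy, redundancy-penalized ranking. The sketch below assumes a generic sentence-embedding function `embed` and combines $rel$ and $red$ by simple subtraction, which is an assumed stand-in since the exact $rank\_score$ formula is not reproduced here.

```python
import numpy as np

def rerank_missing_knowledge(candidates, rel_scores, embed, top_k=10):
    """Greedy Information Gain based Re-ranking: start from the most relevant
    candidate, then repeatedly penalize candidates that are similar to the
    knowledge selected so far."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    vecs = [embed(c) for c in candidates]
    red = [0.0] * len(candidates)              # running redundancy scores
    selected, remaining = [], list(range(len(candidates)))
    while remaining and len(selected) < top_k:
        # rank_score = relevancy - redundancy (assumed combination)
        best = max(remaining, key=lambda j: rel_scores[j] - red[j])
        selected.append(candidates[best])
        remaining.remove(best)
        for j in remaining:                    # update red against the last pick
            red[j] = max(red[j], cosine(vecs[best], vecs[j]))
    return selected
```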
Question Answering
Once the OpenBook knowledge facts $\mathbf {F}$ and missing knowledge $\mathbf {K}$ have been extracted, we move on to the task of answering the questions.
We use the BERT Large model for the question answering task. For each question, we create a passage using the extracted facts and missing knowledge, and fine-tune the BERT Large model for the QA task with one additional feed-forward layer for classification. The passages for the training set were prepared using the knowledge corpus facts $\mathbf {F}$: we create a passage from the top N facts most similar to the actual gold fact $\mathbf {\hat{F}_i}$, with similarities scored by the STS-B trained model (described earlier). The passages for the training set do not use the gold missing knowledge $\mathbf {\hat{K}_i}$ provided in the dataset. For each of our experiments, we use the same trained model, with passages from different IR models.
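A sketch of this passage construction is shown below; `sts_score` stands in for the STS-B trained similarity model, and joining the selected facts with spaces is an assumption.

```python
def build_training_passage(gold_fact, corpus_facts, sts_score, top_n=10):
    """Rank the corpus facts by their STS-B similarity to the gold fact and
    concatenate the top N into a training passage."""
    ranked = sorted(corpus_facts, key=lambda f: sts_score(gold_fact, f),
                    reverse=True)
    return " ".join(ranked[:top_n])
```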
The BERT Large model limits the input length to at most 512 tokens, which restricts the size of the passage. To stay within this limit, we create a passage for each of the answer options and score all answer options against each passage. We refer to this scoring as the sum score, defined as follows:
For each answer option $A_j$, we create a passage $P_j$ and score it against each of the answer options $A_i$. To compute the final score for an answer, we sum the individual scores. If $Q$ is the question, the score for the answer is defined as $ Pr(Q,A_i) = \sum _{j=1}^{4}score(P_j,Q,A_i) $
where $score$ is the classification score given by the BERT Large model. The final answer is chosen as $ A = \operatornamewithlimits{arg\,max}_{A_i} Pr(Q,A_i) $
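A sketch of the sum score and the final argmax is shown below; `score_fn` stands in for the fine-tuned BERT Large classifier and is an assumed interface.

```python
def answer_with_sum_score(score_fn, question, options, passages):
    """Sum score: each answer option A_i is scored against the passage P_j
    built for every option A_j, and the per-option scores are summed; the
    answer is the option with the highest total."""
    totals = [sum(score_fn(p_j, question, a_i) for p_j in passages)
              for a_i in options]
    best = max(range(len(options)), key=lambda i: totals[i])
    return options[best], totals
```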
In the first round, we score each of the answer options using a passage created from the knowledge facts selected from corpus $\mathbf {F}$. For each question, we discard the passages of the answer options that rank in the bottom two; we refer to this as Passage Selection. In the second round, we score only the selected passages, now augmented with the missing knowledge $\mathbf {K}$.
We assume that the correct answer has the highest score in each round; therefore, we multiply the scores obtained in the two rounds. We refer to this as Weighted Scoring. We define the combined passage-selected scores and weighted scores as follows: $ Pr(\mathbf {F},Q,A_i) = \sum _{j=1}^{4}{score(P_j,Q,A_i)} $
where $P_j$ is the passage created from the extracted OpenBook knowledge $\mathbf {F}$. The top two passages are selected based on the scores $Pr(\mathbf {F},Q,A_i)$. $ Pr(\mathbf {F}\cup \mathbf {K},Q,A_i) = \sum _{k=1}^{4}{\delta \cdot score(P_k,Q,A_i)} $
where $\delta =1$ for the top two scores and $\delta =0$ for the rest, and $P_k$ is the passage created using both the facts and the missing knowledge. The final weighted score is: $ wPr(Q,A_i) = Pr(\mathbf {F},Q,A_i) \cdot Pr(\mathbf {F} \cup \mathbf {K},Q,A_i) $
The answer is chosen based on the top weighted score: $ A = \operatornamewithlimits{arg\,max}_{A_i} wPr(Q,A_i) $
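Putting Passage Selection and Weighted Scoring together gives the sketch below. It assumes one passage per answer option in each round (`passages_F` built from corpus facts only, `passages_FK` with missing knowledge added) and identifies the bottom-two options by their round-one sum scores, which is consistent with, but not spelled out in, the description above.

```python
def answer_with_weighted_scoring(score_fn, question, options,
                                 passages_F, passages_FK):
    """Round 1 (Passage Selection): score every option against the four
    passages built from corpus facts F and keep only the F∪K passages of the
    two best-scoring options. Round 2 (Weighted Scoring): rescore every option
    against the kept passages and multiply the two rounds."""
    n = len(options)
    pr_f = [sum(score_fn(p, question, options[i]) for p in passages_F)
            for i in range(n)]

    # indices of the two options whose passages survive Passage Selection
    kept = sorted(range(n), key=lambda i: pr_f[i], reverse=True)[:2]

    pr_fk = [sum(score_fn(passages_FK[k], question, options[i]) for k in kept)
             for i in range(n)]

    weighted = [pr_f[i] * pr_fk[i] for i in range(n)]
    best = max(range(n), key=lambda i: weighted[i])
    return options[best], weighted
```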
Table 4 shows the incremental improvement on the baselines after inclusion of carefully selected knowledge.
Passage Selection and Weighted Scoring are used to overcome the challenge of boosted prediction scores caused by the cascading effect of errors in each stage.
Question: What eat plants? (A) leopards (B) eagles (C) owls (D) robin
Appropriate extracted fact from $\mathbf {F}$: some birds eat plants
Wrong extracted fact from $\mathbf {F}$: a salamander eats insects
Wrong retrieved missing knowledge: Leopard geckos eat mostly insects
For the example shown above, the wrong answer leopards had a very low score with only the facts extracted from the knowledge corpus $\mathbf {F}$, but the introduction of missing knowledge retrieved from the wrong fact boosts its score, leading to a wrong prediction. Passage Selection helps remove such options, and Weighted Scoring gives preference to answer options whose scores are relatively high both before and after the inclusion of missing knowledge.
Dataset and Experimental Setup
The OpenBookQA dataset contains 4,957 questions in the train set and 500 multiple-choice questions each in the validation and test sets. We train a BERT Large based QA model using the top ten knowledge facts from the corpus $\mathbf {F}$ as a passage for both the training and validation sets, and select the model that gives the best score on the validation set. The same model is used to score the validation and test sets with passages derived from different methods of abductive IR. The best abductive IR model and the number of facts from $\mathbf {F}$ and $\mathbf {K}$ are selected based on the best validation scores for the QA task.
Abductive Information Retrieval
We evaluate the abductive IR techniques at different values for the number of facts from $\mathbf {F}$ and the number of missing knowledge sentences $\mathbf {K}$ extracted using IR. Figure 2 shows the accuracy for different combinations of $\mathbf {F}$ and $\mathbf {K}$, for all four IR techniques prior to Information Gain based Re-ranking. In general, we notice that the trained models perform poorly compared to the baselines, while the Word Symmetric Difference model performs better, indicating that abductive IR helps. The poor performance of the trained models can be attributed to the challenge of learning abductive inference.
For the example below, it can be seen that the pre-reranking facts are relevant to the question but contribute very little when the knowledge facts retrieved from the corpus $\mathbf {F}$ and the correct answer are taken into account. Figure 3 shows the impact of Information Gain based Re-ranking: removing redundant data leaves room for more relevant information to be present in the top N retrieved missing knowledge $\mathbf {K}$.
Question: A red-tailed hawk is searching for prey. It is most likely to swoop down on what? (A) an eagle (B) a cow (C) a gecko (D) a deer
Fact from $\mathbf {F}$: hawks eats lizards
Pre-Reranking $\mathbf {K}$: red-tail hawk in their search for prey; Red-tailed hawks soar over the prairie and woodlands in search of prey.
Post-Reranking $\mathbf {K}$: Geckos - only vocal lizards. Every gecko is a lizard.
Model Analysis
BERT Question Answering model: BERT performs well on this task but is prone to distractions; repetition of information leads to boosted prediction scores. BERT performs well for lookup-based QA, as in RCQA tasks like SQuAD, but this poses a challenge for open-domain QA, since the extracted knowledge enables lookup for all answer options, creating an adversarial setting for lookup-based QA. The model is still able to find the correct answer under this adversarial setting, as shown by the performance of the sum score in selecting the answer after Passage Selection.
Symmetric Difference Model: This model improves on the baseline Word Union model by 1-2%. The improvement is dwarfed by inappropriate domain knowledge from $\mathbf {F}$ being used for abduction: the intersection between the inappropriate domain knowledge and the answer hypothesis is $\mathbf {\varnothing }$, which leads to queries that are exactly the same as those of the Word Union model.
Supervised learned models: The supervised learned models for abduction under-perform. The Bag of Words and Seq2Seq models fail to extract keywords for many $\mathbf {F}$-$\mathbf {H}$ pairs, sometimes missing the keywords from the answers. The Seq2Seq model sometimes extracts the exact missing knowledge, for example generating “some birds is robin” or “lizard is gecko”. This shows the approach is promising, and the poor performance can be attributed to the small training set of only 4,957 examples. A fact verification model might improve the accuracy of the supervised learned models. However, for many questions, these models fail to extract proper keywords, copying just a part of the question or the knowledge fact.
Error Analysis
Other than errors due to distractions and failed IR, which account for around 85% of the total errors, the remaining errors fall into four broad categories.
Temporal Reasoning: In the example shown below, even though both options (B) and (C) could be considered night, the fact that 2:00 AM is more suitable for owls than 6:00 PM makes the question difficult to reason about. Such issues accounted for 5% of the errors.
Question: Owls are likely to hunt at? (A) 3:00 PM (B) 2:00 AM (C) 6:00 PM (D) 7:00 AM
Negation: In the example shown below, a model that handles negation specifically is needed to reject the incorrect options. Such issues accounted for 1% of the errors.
Question: Which of the following is not an input in photosynthesis? (A) sunlight (B) oxygen (C) water (D) carbon dioxide
Conjunctive Reasoning: In the example shown below, each answer option is partially correct, as the word “bear” is present. Thus a model has to learn whether all parts of the answer are true or not, i.e., conjunctive reasoning. Logically, all answers are correct, since we see an “or”, but option (A) makes the most sense. Such issues accounted for 1% of the errors.
Question: Some berries may be eaten by (A) a bear or person (B) a bear or shark (C) a bear or lion (D) a bear or wolf
Qualitative Reasoning: In the example shown below, every answer option would stop a car, but option (D) is the most suitable since it will stop the car quickest. Deeper qualitative reasoning is needed to reject the incorrect options. Such issues accounted for 8% of the errors.
Question: Which of these would stop a car quicker? (A) a wheel with wet brake pads (B) a wheel without brake pads (C) a wheel with worn brake pads (D) a wheel with dry brake pads
Conclusion
In this work, we have pushed the current state of the art for the OpenBookQA task using simple techniques and careful selection of knowledge. We have provided two new ways of performing knowledge extraction over a knowledge base for QA and evaluated three ways to perform abductive inference over natural language. All techniques are shown to improve the performance on the final QA task, but there is still a long way to go to reach human performance.
We analyzed the performance of various components of our QA system. For the natural language abduction, the heuristic technique performs better than the supervised techniques. Our analysis also shows the limitations of BERT based MCQ models, the challenge of learning natural language abductive inference and the multiple types of reasoning required for an OpenBookQA task. Nevertheless, our overall system improves on the state of the art by 11.6%.
Acknowledgement
We thank NSF for the grant 1816039 and DARPA for partially supporting this research. | in the OpenBookQA setup the open book part is much larger (than a small paragraph) and is not complete, as additional common knowledge may be required
9623884915b125d26e13e8eeebe9a0f79d56954b | 9623884915b125d26e13e8eeebe9a0f79d56954b_0 | Q: At what text unit/level were documents processed?
Text: Introduction
Business documents broadly characterize a large class of documents that are central to the operation of business. These include legal contracts, purchase orders, financial statements, regulatory filings, and more. Such documents have a number of characteristics that set them apart from the types of texts that most NLP techniques today are designed to process (Wikipedia articles, news stories, web pages, etc.): They are heterogeneous and frequently contain a mix of both free text as well as semi-structured elements (tables, headings, etc.). They are, by definition, domain specific, often with vocabulary, phrases, and linguistic structures (e.g., legal boilerplate and terms of art) that are rarely seen in general natural language corpora.
Despite these challenges, there is great potential in the application of NLP technologies to business documents. Take, for example, contracts that codify legal agreements between two or more parties. Organizations (particularly large enterprises) need to monitor contracts for a range of tasks, a process that can be partially automated if certain content elements can be extracted from the contracts themselves by systems BIBREF0. In general, if we are able to extract structured entities from business documents, these outputs can be better queried and manipulated, potentially facilitating more efficient business operations.
In this paper, we present BERT-based models for extracting content elements from two very different types of business documents: regulatory filings and property lease agreements. Given the success of deep transformer-based models such as BERT BIBREF1 and their ability to handle sequence labeling tasks, adopting such an approach seemed like an obvious starting point. In this context, we are primarily interested in two questions: First, how data efficient is BERT for fine-tuning to new specialized domains? Specifically, how much annotated data do we need to achieve some (reasonable) level of accuracy? This is an important question due to the heterogeneity of business documents; it would be onerous if organizations were required to engage in large annotation efforts for every type of document. Second, how would a BERT model pre-trained on general natural language corpora perform in specific, and potentially highly-specialized, domains?
There are aspects of this task that make it both easier and more difficult than “traditional” IE. Even though they are expressed in natural language, business documents frequently take constrained forms, sometimes even “template-like” to a certain degree. As such, it may be easy to learn cue phrases and other fixed expressions that indicate the presence of some element (i.e., pattern matching). On the other hand, the structure and vocabulary of the texts may be very different from the types of corpora modern deep models are trained on; for example, researchers have shown that models for processing the scientific literature benefit immensely from pre-training on scientific articles BIBREF2, BIBREF3. Unfortunately, we are not aware of any large, open corpora of business documents for running comparable experiments.
The contribution of our work is twofold: From the scientific perspective, we begin to provide some answers to the above questions. With two case studies, we find that a modest amount of domain-specific annotated data (less than 100 documents) is sufficient to fine-tune BERT to achieve reasonable accuracy in extracting a set of content elements. From a practical perspective, we showcase our efforts in an end-to-end cloud platform that provides an easy-to-use annotation interface as well as an inference interface that allows users to upload documents and inspect the results of our models.
Approach
Within the broad space of business documents, we have decided to focus on two specific types: regulatory filings and property lease agreements. While our approach is not language specific, all our work is conducted on Chinese documents. In this section, we first describe these documents and our corpora, our sequence labeling model, and finally our evaluation approach.
Approach ::: Datasets
Regulatory Filings. We focused on a specific type of filing: disclosures of pledges by shareholders when their shares are offered up for collateral. These are publicly accessible and were gathered from the database of a stock exchange in China. We observe that most of these announcements are fairly formulaic, likely generated by templates. However, we treated them all as natural language text and did not exploit this observation; for example, we made no explicit attempt to induce template structure or apply clustering—although such techniques would likely improve extraction accuracy. In total, we collected and manually annotated 150 filings, which were divided into training, validation, and test sets with a 6:2:2 split. Our test corpus comprises 30 regulatory filings. Table TABREF6 enumerates the seven content elements that we extract.
Property Lease Agreements. These contracts mostly follow a fixed “schema” with a certain number of prescribed elements (leaseholder, tenant, rent, deposit, etc.); Table TABREF7 enumerates the eight elements that our model extracts. Since most property lease agreements are confidential, no public corpus for research exists, and thus we had to build our own. To this end, we searched the web for publicly-available templates of property lease agreements and found 115 templates in total. For each template, we manually generated one, two, or three instances, using a fake data generator tool to fill in the missing content elements such as addresses. In total, we created (and annotated) 223 contracts by hand. This corpus was further split into training, validation, and test data with a 6:2:2 split. Our test set contains 44 lease agreements, 11 of which use templates that are not seen in the training set. We report evaluation over both the full test set and on only these unseen templates; the latter condition specifically probes our model's ability to generalize.
Approach ::: Model
An obvious approach to content element extraction is to formulate the problem as a sequence labeling task. Prior to the advent of neural networks, Conditional Random Fields (CRFs) BIBREF4, BIBREF5 represented the most popular approach to this task. Starting from a few years ago, neural networks have become the dominant approach, starting with RNNs BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11. Most recently, deep transformer-based models such as BERT represent the state of the art in this task BIBREF1, BIBREF12, BIBREF13. We adopt the sequence labeling approach of BIBREF1, based on annotations of our corpus using a standard BIO tagging scheme with respect to the content elements we are interested in.
We extend BERT Base-Chinese (12-layer, 768-hidden, 12-heads, 110M parameters) for sequence labeling. All documents are segmented into paragraphs and processed at the paragraph level (both training and inference); this is acceptable because we observe that most paragraphs are less than 200 characters. The input sequences are segmented by the BERT tokenizer, with the special [CLS] token inserted at the beginning and the special [SEP] token added at the end. All inputs are then padded to a length of 256 tokens. After feeding through BERT, we obtain the hidden state of the final layer, denoted as ($h_{1}$, $h_{2}$, ... $h_{N}$) where $N$ is the max length setting. We add a fully-connected layer and softmax on top, and the final prediction is formulated as:
$\hat{y}_i = \mathrm{softmax}(W h_i + b)$, where ${W}$ represents the parameter of the fully-connected layer and ${b}$ is the bias. The learning objective is to maximize the likelihood of the gold labels over all tokens.
For simplicity, we assume that all tokens can be predicted independently. For model training, we set the max sequence length to 256, the learning rate to ${10^{-4}}$, and run the model for 8 epochs. We use all other default settings in the TensorFlow implementation of BERT.
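A compact sketch of this fine-tuning setup is shown below. The paper uses the TensorFlow implementation of BERT; the sketch instead uses the Hugging Face Transformers API (PyTorch) for brevity, and the label set, example paragraph, and tag-alignment details are illustrative assumptions.

```python
import torch
from transformers import BertTokenizerFast, BertForTokenClassification

# Illustrative BIO label set for a single content element ("tenant");
# the full label list would cover every content element type.
labels = ["O", "B-TENANT", "I-TENANT"]
label2id = {l: i for i, l in enumerate(labels)}

tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
model = BertForTokenClassification.from_pretrained("bert-base-chinese",
                                                   num_labels=len(labels))

def encode_paragraph(paragraph, char_tags):
    """Tokenize one paragraph (processing is paragraph-level) and align
    per-character BIO tags to word pieces, padded to 256 tokens."""
    enc = tokenizer(paragraph, truncation=True, padding="max_length",
                    max_length=256, return_offsets_mapping=True)
    enc["labels"] = [
        # special tokens and padding get "O" here for simplicity; in practice
        # they would be masked out of the loss with -100
        label2id["O"] if start == end else label2id[char_tags[start]]
        for start, end in enc["offset_mapping"]
    ]
    enc.pop("offset_mapping")
    return enc

# A single optimization step; batching and the 8-epoch schedule are omitted.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
example = encode_paragraph("承租方:张三",
                           ["O", "O", "O", "O", "B-TENANT", "I-TENANT"])
inputs = {k: torch.tensor([v]) for k, v in example.items()}
loss = model(**inputs).loss
loss.backward()
optimizer.step()
```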
Approach ::: Inference and Evaluation
At inference time, documents from the test set are segmented into paragraphs and fed into the fine-tuned BERT model one at a time. Typically, sequence labeling tasks are evaluated in terms of precision, recall, and F$_1$ at the entity level, per sentence. However, such an evaluation is inappropriate for our task because the content elements represent properties of the entire document as a whole, not individual sentences.
Instead, we adopted the following evaluation procedure: For each content element type (e.g., “tenant”), we extract all tagged spans from the document, and after deduplication, treat the entities as a set that we then measure against the ground truth in terms of precision, recall, and F$_1$. We do this because there may be multiple ground truth entities and BERT may mark multiple spans in a document with a particular entity type. Note that the metrics are based on exact matches—this means that, for example, if the extracted entity has an extraneous token compared to a ground truth entity, the system receives no credit.
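The document-level, set-based metrics can be computed as in the sketch below, where spans are compared by exact string match.

```python
def evaluate_element(predicted_spans, gold_spans):
    """Set-based, document-level evaluation for one content element type:
    deduplicate the tagged spans and compare against the ground truth with
    exact matching."""
    pred, gold = set(predicted_spans), set(gold_spans)
    tp = len(pred & gold)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```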
Results
Our main results are presented in Table TABREF6 on the test set of the regulatory filings and in Table TABREF7 on the test set of the property lease agreements; F$_1$, precision, and recall are computed in the manner described above. We show metrics across all content elements (micro-averaged) as well as broken down by types. For the property lease agreements, we show results on all documents (left) and only over those with unseen templates (right). Examining these results, we see that although there is some degradation in effectiveness between all documents and only unseen templates, it appears that BERT is able to generalize to previously-unseen expressions of the content elements. Specifically, it is not the case that the model is simply memorizing fixed patterns or key phrases—otherwise, we could just craft a bunch of regular expression patterns for this task. This is a nice result that shows off the power of modern neural NLP models.
Overall, we would characterize our models as achieving reasonable accuracy, comparable to extraction tasks in more “traditional” domains, with modest amounts of training data. It does appear that with fine tuning, BERT is able to adapt to the linguistic characteristics of these specialized types of documents. For example, the regulatory filings have quite specialized vocabulary and the property lease agreements have numeric heading structures—BERT does not seem to be confused by these elements, which for the most part do not appear in the texts that the model was pre-trained on. Naturally, accuracy varies across different content elements: For the rental agreements, entities such as leaseholder, tenant, start date, and end date perform much better than others. For the regulatory filing, the model performs well on all content elements except for one; there were very few examples of “% of pledged shares in the shareholder's total share holdings” in our training data, and thus accuracy is very low despite the fact that percentages are straightforward to identify. It seems that “easy” entities often have more fixed forms and are quite close to entities that the model may have encountered during pre-training (e.g., names and dates). In contrast, “difficult” elements are often domain-specific and widely vary in their forms.
How data efficient is BERT when fine tuning on annotated data? We can answer this question by varying the amount of training data used to fine tune the BERT models, holding everything else constant. These results are shown in Figure FIGREF10 for the regulatory filings (30, 60, 90 randomly-selected documents) and in Figure FIGREF11 for the property lease agreements (30, 60, 90, 120 randomly-selected documents); in all cases, the development set is fixed. For brevity, we only show F$_1$ scores, but we observe similar trends for the other metrics. For both document types, it seems like 60–90 documents are sufficient to achieve F$_1$ on par with using all available training data. Beyond this point, we hit rapidly diminishing returns. For a number of “easy” content elements (e.g., dates in the property lease agreements), it seems like 30 documents are sufficient to achieve good accuracy, and more does not appear to yield substantial improvements. Note that in a few cases, training on more data actually decreases F$_1$ slightly, but this can be attributed to noise in the sampling process.
Finally, in Table TABREF8 we show an excerpt from each type of document along with the content elements that are extracted by our BERT models. We provide both the original source Chinese texts as well as English translations to provide the reader with a general sense of the source documents and how well our models behave.
Cloud Platform
All the capabilities described in this paper come together in an end-to-end cloud-based platform that we have built. The platform has two main features: First, it provides an annotation interface that allows users to define content elements, upload documents, and annotate documents; a screenshot is shown in Figure FIGREF12. We have invested substantial effort in making the interface as easy to use as possible; for example, annotating content elements is as easy as selecting text from the document. Our platform is able to ingest documents in a variety of formats, including PDFs and Microsoft Word, and converts these formats into plain text before presenting them to the annotators.
The second feature of the platform is the ability for users to upload new documents and apply inference on them using a fine-tuned BERT model; a screenshot of this feature is shown in Figure FIGREF13. The relevant content elements are highlighted in the document.
On the cloud platform, the inference module also applies a few simple rule-based modifications to post-process BERT extraction results. For any of the extracted dates, we further applied a date parser based on rules and regular expressions to normalize and canonicalize the extracted outputs. In the regulatory filings, we tried to normalize numbers that were written in a mixture of Arabic numerals and Chinese units (e.g., “亿”, the unit for $10^8$) and discarded partial results if simple rule-based rewrites were not successful. In the property lease agreements, the contract length, if not directly extracted by BERT, is computed from the extracted start and end dates. Note that these post processing steps were not applied in the evaluation presented in the previous section, and so the figures reported in Tables TABREF6 and TABREF7 actually under-report the accuracy of our models in a real-world setting.
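The rules below give a flavor of this post-processing; they are illustrative simplifications (e.g., only the 万 and 亿 units and one date pattern are handled), not the platform's actual rule set.

```python
import re

CN_UNITS = {"万": 10**4, "亿": 10**8}   # common Chinese numeric units

def normalize_amount(text):
    """Rewrite numbers mixing Arabic numerals and Chinese units, e.g.
    '3.5亿' -> 350000000; return None when a simple rewrite fails, in which
    case the partial result is discarded."""
    m = re.fullmatch(r"([0-9]+(?:\.[0-9]+)?)([万亿])?", text.strip())
    if not m:
        return None
    return int(float(m.group(1)) * CN_UNITS.get(m.group(2), 1))

def normalize_date(text):
    """Canonicalize dates such as '2019年7月1日' to ISO format."""
    m = re.search(r"(\d{4})\s*年\s*(\d{1,2})\s*月\s*(\d{1,2})\s*日", text)
    if not m:
        return None
    return f"{m.group(1)}-{int(m.group(2)):02d}-{int(m.group(3)):02d}"
```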
Conclusions
This work tackles the challenge of content extraction from two types of business documents, regulatory filings and property lease agreements. The problem is straightforwardly formulated as a sequence labeling task, and we fine-tune BERT for this application. We show that our simple models can achieve reasonable accuracy with only modest amounts of training data, illustrating the power and flexibility of modern NLP models. Our cloud platform pulls these models together in an easy-to-use interface for addressing real-world business needs. | documents are segmented into paragraphs and processed at the paragraph level |
77db56fee07b01015a74413ca31f19bea7203f0b 77db56fee07b01015a74413ca31f19bea7203f0b_0 Q: What evaluation metrics were used for presenting results?
Text: same as the paper above. | F$_1$, precision, and recall
c309e87c9e08cf847f31e554577d6366faec1ea0 | c309e87c9e08cf847f31e554577d6366faec1ea0_0 | Q: Was the structure of regulatory filings exploited when training the model?
Text: same as the paper above. | No
81cee2fc6edd9b7bc65bbf6b4aa35782339e6cff | 81cee2fc6edd9b7bc65bbf6b4aa35782339e6cff_0 | Q: What type of documents are supported by the annotation platform?
Text: Introduction
Business documents broadly characterize a large class of documents that are central to the operation of business. These include legal contracts, purchase orders, financial statements, regulatory filings, and more. Such documents have a number of characteristics that set them apart from the types of texts that most NLP techniques today are designed to process (Wikipedia articles, news stories, web pages, etc.): They are heterogeneous and frequently contain a mix of both free text as well as semi-structured elements (tables, headings, etc.). They are, by definition, domain specific, often with vocabulary, phrases, and linguistic structures (e.g., legal boilerplate and terms of art) that are rarely seen in general natural language corpora.
Despite these challenges, there is great potential in the application of NLP technologies to business documents. Take, for example, contracts that codify legal agreements between two or more parties. Organizations (particularly large enterprises) need to monitor contracts for a range of tasks, a process that can be partially automated if certain content elements can be extracted from the contracts themselves by systems BIBREF0. In general, if we are able to extract structured entities from business documents, these outputs can be better queried and manipulated, potentially facilitating more efficient business operations.
In this paper, we present BERT-based models for extracting content elements from two very different types of business documents: regulatory filings and property lease agreements. Given the success of deep transformer-based models such as BERT BIBREF1 and their ability to handle sequence labeling tasks, adopting such an approach seemed like an obvious starting point. In this context, we are primarily interested in two questions: First, how data efficient is BERT for fine-tuning to new specialized domains? Specifically, how much annotated data do we need to achieve some (reasonable) level of accuracy? This is an important question due to the heterogeneity of business documents; it would be onerous if organizations were required to engage in large annotation efforts for every type of document. Second, how would a BERT model pre-trained on general natural language corpora perform in specific, and potentially highly-specialized, domains?
There are aspects of this task that make it both easier and more difficult than “traditional” IE. Even though they are expressed in natural language, business documents frequently take constrained forms, sometimes even “template-like” to a certain degree. As such, it may be easy to learn cue phrases and other fixed expressions that indicate the presence of some element (i.e., pattern matching). On the other hand, the structure and vocabulary of the texts may be very different from the types of corpora modern deep models are trained on; for example, researchers have shown that models for processing the scientific literature benefit immensely from pre-training on scientific articles BIBREF2, BIBREF3. Unfortunately, we are not aware of any large, open corpora of business documents for running comparable experiments.
The contribution of our work is twofold: From the scientific perspective, we begin to provide some answers to the above questions. With two case studies, we find that a modest amount of domain-specific annotated data (less than 100 documents) is sufficient to fine-tune BERT to achieve reasonable accuracy in extracting a set of content elements. From a practical perspective, we showcase our efforts in an end-to-end cloud platform that provides an easy-to-use annotation interface as well as an inference interface that allows users to upload documents and inspect the results of our models.
Approach
Within the broad space of business documents, we have decided to focus on two specific types: regulatory filings and property lease agreements. While our approach is not language specific, all our work is conducted on Chinese documents. In this section, we first describe these documents and our corpora, our sequence labeling model, and finally our evaluation approach.
Approach ::: Datasets
Regulatory Filings. We focused on a specific type of filing: disclosures of pledges by shareholders when their shares are offered up for collateral. These are publicly accessible and were gathered from the database of a stock exchange in China. We observe that most of these announcements are fairly formulaic, likely generated by templates. However, we treated them all as natural language text and did not exploit this observation; for example, we made no explicit attempt to induce template structure or apply clustering—although such techniques would likely improve extraction accuracy. In total, we collected and manually annotated 150 filings, which were divided into training, validation, and test sets with a 6:2:2 split. Our test corpus comprises 30 regulatory filings. Table TABREF6 enumerates the seven content elements that we extract.
Property Lease Agreements. These contracts mostly follow a fixed “schema” with a certain number of prescribed elements (leaseholder, tenant, rent, deposit, etc.); Table TABREF7 enumerates the eight elements that our model extracts. Since most property lease agreements are confidential, no public corpus for research exists, and thus we had to build our own. To this end, we searched the web for publicly-available templates of property lease agreements and found 115 templates in total. For each template, we manually generated one, two, or three instances, using a fake data generator tool to fill in the missing content elements such as addresses. In total, we created (and annotated) 223 contracts by hand. This corpus was further split into training, validation, and test data with a 6:2:2 split. Our test set contains 44 lease agreements, 11 of which use templates that are not seen in the training set. We report evaluation over both the full test set and on only these unseen templates; the latter condition specifically probes our model's ability to generalize.
Approach ::: Model
An obvious approach to content element extraction is to formulate the problem as a sequence labeling task. Prior to the advent of neural networks, Conditional Random Fields (CRFs) BIBREF4, BIBREF5 represented the most popular approach to this task. Starting from a few years ago, neural networks have become the dominant approach, starting with RNNs BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11. Most recently, deep transformer-based models such as BERT represent the state of the art in this task BIBREF1, BIBREF12, BIBREF13 . We adopt the sequence labeling approach of BIBREF1, based on annotations of our corpus using a standard BIO tagging scheme with respect to the content elements we are interested in.
We extend BERT Base-Chinese (12-layer, 768-hidden, 12-heads, 110M parameters) for sequence labeling. All documents are segmented into paragraphs and processed at the paragraph level (both training and inference); this is acceptable because we observe that most paragraphs are less than 200 characters. The input sequences are segmented by the BERT tokenizer, with the special [CLS] token inserted at the beginning and the special [SEP] token added at the end. All inputs are then padded to a length of 256 tokens. After feeding through BERT, we obtain the hidden state of the final layer, denoted as ($h_{1}$, $h_{2}$, ... $h_{N}$) where $N$ is the max length setting. We add a fully-connected layer and softmax on top, and the final prediction is formulated as:
$y_{i} = \mathrm {softmax}({W} h_{i} + {b})$
where ${W}$ represents the parameters of the fully-connected layer and ${b}$ is the bias. The learning objective is to maximize the likelihood of the ground-truth labels, summed over all tokens.
For simplicity, we assume that all tokens can be predicted independently. For model training, we set the max sequence length to 256, the learning rate to ${10^{-4}}$, and run the model for 8 epochs. We use all other default settings in the TensorFlow implementation of BERT.
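To make the setup above concrete, the following is a minimal sketch of a BERT-based sequence labeler of this shape, written with the Hugging Face transformers library in PyTorch rather than the TensorFlow BERT code used in our experiments; the number of BIO labels and the example input are placeholders.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

class BertBioTagger(nn.Module):
    # BERT hidden states -> fully-connected layer -> per-token label scores (BIO scheme).
    def __init__(self, num_labels):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")   # 12-layer, 768-hidden
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        # hidden states (h_1, ..., h_N) of the final layer, shape (batch, 256, 768)
        hidden = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        return torch.log_softmax(self.classifier(hidden), dim=-1)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
batch = tokenizer(["出租方（甲方）与承租方（乙方）签订本合同。"], padding="max_length",
                  truncation=True, max_length=256, return_tensors="pt")
tagger = BertBioTagger(num_labels=2 * 8 + 1)   # B/I tag per content element type, plus O
scores = tagger(batch["input_ids"], batch["attention_mask"])
```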
Approach ::: Inference and Evaluation
At inference time, documents from the test set are segmented into paragraphs and fed into the fine-tuned BERT model one at a time. Typically, sequence labeling tasks are evaluated in terms of precision, recall, and F$_1$ at the entity level, per sentence. However, such an evaluation is inappropriate for our task because the content elements represent properties of the entire document as a whole, not individual sentences.
Instead, we adopted the following evaluation procedure: For each content element type (e.g., “tenant”), we extract all tagged spans from the document, and after deduplication, treat the entities as a set that we then measure against the ground truth in terms of precision, recall, and F$_1$. We do this because there may be multiple ground truth entities and BERT may mark multiple spans in a document with a particular entity type. Note that the metrics are based on exact matches—this means that, for example, if the extracted entity has an extraneous token compared to a ground truth entity, the system receives no credit.
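As a small illustration of this set-based scoring (a sketch of the procedure just described, with exact string matching after deduplication):

```python
def prf_for_element(predicted_spans, gold_spans):
    # Deduplicate extracted spans (via sets) and require exact matches against the ground truth.
    pred, gold = set(predicted_spans), set(gold_spans)
    tp = len(pred & gold)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# A span with an extraneous token does not match and receives no credit:
print(prf_for_element({"张三", "张三先生"}, {"张三"}))   # (0.5, 1.0, 0.666...)
```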
Results
Our main results are presented in Table TABREF6 on the test set of the regulatory filings and in Table TABREF7 on the test set of the property lease agreements; F$_1$, precision, and recall are computed in the manner described above. We show metrics across all content elements (micro-averaged) as well as broken down by types. For the property lease agreements, we show results on all documents (left) and only over those with unseen templates (right). Examining these results, we see that although there is some degradation in effectiveness between all documents and only unseen templates, it appears that BERT is able to generalize to previously-unseen expressions of the content elements. Specifically, it is not the case that the model is simply memorizing fixed patterns or key phrases—otherwise, we could just craft a bunch of regular expression patterns for this task. This is a nice result that shows off the power of modern neural NLP models.
Overall, we would characterize our models as achieving reasonable accuracy, comparable to extraction tasks in more “traditional” domains, with modest amounts of training data. It does appear that with fine tuning, BERT is able to adapt to the linguistic characteristics of these specialized types of documents. For example, the regulatory filings have quite specialized vocabulary and the property lease agreements have numeric heading structures—BERT does not seem to be confused by these elements, which for the most part do not appear in the texts that the model was pre-trained on. Naturally, accuracy varies across different content elements: For the rental agreements, entities such as leaseholder, tenant, start date, and end date perform much better than others. For the regulatory filing, the model performs well on all content elements except for one; there were very few examples of “% of pledged shares in the shareholder's total share holdings” in our training data, and thus accuracy is very low despite the fact that percentages are straightforward to identify. It seems that “easy” entities often have more fixed forms and are quite close to entities that the model may have encountered during pre-training (e.g., names and dates). In contrast, “difficult” elements are often domain-specific and widely vary in their forms.
How data efficient is BERT when fine tuning on annotated data? We can answer this question by varying the amount of training data used to fine tune the BERT models, holding everything else constant. These results are shown in Figure FIGREF10 for the regulatory filings (30, 60, 90 randomly-selected documents) and in Figure FIGREF11 for the property lease agreements (30, 60, 90, 120 randomly-selected documents); in all cases, the development set is fixed. For brevity, we only show F$_1$ scores, but we observe similar trends for the other metrics. For both document types, it seems like 60–90 documents are sufficient to achieve F$_1$ on par with using all available training data. Beyond this point, we hit rapidly diminishing returns. For a number of “easy” content elements (e.g., dates in the property lease agreements), it seems like 30 documents are sufficient to achieve good accuracy, and more does not appear to yield substantial improvements. Note that in a few cases, training on more data actually decreases F$_1$ slightly, but this can be attributed to noise in the sampling process.
Finally, in Table TABREF8 we show an excerpt from each type of document along with the content elements that are extracted by our BERT models. We provide both the original source Chinese texts as well as English translations to provide the reader with a general sense of the source documents and how well our models behave.
Cloud Platform
All the capabilities described in this paper come together in an end-to-end cloud-based platform that we have built. The platform has two main features: First, it provides an annotation interface that allows users to define content elements, upload documents, and annotate documents; a screenshot is shown in Figure FIGREF12. We have invested substantial effort in making the interface as easy to use as possible; for example, annotating content elements is as easy as selecting text from the document. Our platform is able to ingest documents in a variety of formats, including PDFs and Microsoft Word, and converts these formats into plain text before presenting them to the annotators.
The second feature of the platform is the ability for users to upload new documents and apply inference on them using a fine-tuned BERT model; a screenshot of this feature is shown in Figure FIGREF13. The relevant content elements are highlighted in the document.
On the cloud platform, the inference module also applies a few simple rule-based modifications to post-process BERT extraction results. For any of the extracted dates, we further applied a date parser based on rules and regular expressions to normalize and canonicalize the extracted outputs. In the regulatory filings, we tried to normalize numbers that were written in a mixture of Arabic numerals and Chinese units (e.g., “UTF8gbsn亿”, the unit for $10^8$) and discarded partial results if simple rule-based rewrites were not successful. In the property lease agreements, the contract length, if not directly extracted by BERT, is computed from the extracted start and end dates. Note that these post processing steps were not applied in the evaluation presented in the previous section, and so the figures reported in Tables TABREF6 and TABREF7 actually under-report the accuracy of our models in a real-world setting.
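The following sketch illustrates the kind of rule-based post-processing described above; the regular expression, unit table, and example values are illustrative placeholders, not the production rules.

```python
import re
from datetime import date

CHINESE_UNITS = {"万": 10**4, "亿": 10**8}

def normalize_number(text):
    # Normalize a value written with Arabic numerals plus a Chinese unit, e.g. "3.5亿".
    m = re.fullmatch(r"([0-9.,]+)\s*([万亿])?", text.strip())
    if m is None:
        return None                                   # discard partial results
    value = float(m.group(1).replace(",", ""))
    return value * CHINESE_UNITS.get(m.group(2), 1)

def contract_length_days(start, end):
    # Fallback when the contract length is not extracted directly by BERT.
    return (end - start).days

print(normalize_number("3.5亿"))                                   # 350000000.0
print(contract_length_days(date(2020, 1, 1), date(2021, 1, 1)))    # 366
```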
Conclusions
This work tackles the challenge of content extraction from two types of business documents, regulatory filings and property lease agreements. The problem is straightforwardly formulated as a sequence labeling task, and we fine-tune BERT for this application. We show that our simple models can achieve reasonable accuracy with only modest amounts of training data, illustrating the power and flexibility of modern NLP models. Our cloud platform pulls these models together in an easy-to-use interface for addressing real-world business needs. | Variety of formats supported (PDF, Word...), user can define content elements of document |
79620a2b4b121b6d3edd0f7b1d4a8cc7ada0b516 | 79620a2b4b121b6d3edd0f7b1d4a8cc7ada0b516_0 | Q: What are the state-of-the-art models for the task?
Text: Introduction
Disinformation presents a serious threat to society, as the proliferation of fake news can have a significant impact on an individual's perception of reality. Fake news is a claim or story that is fabricated, with the intention to deceive, often for a secondary motive such as economic or political gain BIBREF0. In the age of digital news and social media, fake news can spread rapidly, impacting large numbers of people in a short period of time BIBREF1. To mitigate the negative impact of fake news on society, various organizations now employ personnel to verify dubious claims through a manual fact-checking procedure; however, this process is very laborious. With a fast-paced modern news cycle, many journalists and fact-checkers are under increased stress to be more efficient in their daily work. To assist in this process, automated fact-checking has been proposed as a potential solution BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6.
Automated fact-checking systems aim to assess the veracity of claims through the collection and assessment of news articles and other relevant documents pertaining to the claim at hand. These systems have the potential to augment the work of professional fact-checkers, as well as provide a tool to the public to verify claims they come across online or in their daily lives. An automated fact-checking system consists of several sub-tasks that, when combined, can predict if a claim is truthful BIBREF7. Document retrieval aims to gather relevant articles regarding the claim from a variety of sources. Stance detection aims to determine the position of each article with respect to the claim. Reputation assessment aims to determine the trustworthiness of each article by analyzing its linguistics and source. Claim verification aims to combine stance and reputation information to determine the truthfulness of the claim.
In this paper, we focus on stance detection; given a proposed claim and article, predict if the article agrees, disagrees, has no stance, or is unrelated to the claim. Within the natural language processing (NLP) community, research in stance detection has been catalyzed by the organization of competitions BIBREF8, BIBREF9, BIBREF10 and the collection of benchmark datasets BIBREF11, BIBREF12, BIBREF13. Prominent methods addressing stance detection largely differ in terms of their feature representation (e.g., n-grams, TF-IDF, word embeddings, etc.) and algorithms (e.g., decision trees, multi-layer perceptrons, LSTM networks, etc.); retrospectives on recent challenges BIBREF8, BIBREF9, BIBREF14 provide a comprehensive overview of NLP methods in stance detection. While results have been promising, recent developments in NLP hold the potential for significant improvement. Whereas pre-trained word embeddings such as word2vec BIBREF15 and GloVe BIBREF16 encode language into shallow numerical representations for input to machine learning models, deep bidirectional transformer language models BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21 train on large, unlabelled datasets to learn deeper hierarchical representations of language. The result has been a significant improvement on multi-task NLP benchmarks BIBREF22, akin to an "ImageNet moment" for the field.
Motivated by recent advances in NLP and the potential of this technology to meaningfully impact society by addressing the United Nations' Sustainable Development Goals of "Quality Education" and "Peace, Justice, and Strong Institutions", we explore the notion of harnessing large-scale deep bidirectional transformer language models for achieving state-of-the-art stance detection. Our major contributions are: (1) constructing a large-scale language model for stance detection by performing transfer learning on a RoBERTa deep bidirectional transformer language model by taking advantage of bidirectional cross-attention between claim-article pairs via pair encoding with self-attention, and (2) state-of-the-art results on the Fake News Challenge Stage 1 (FNC-I) benchmark.
Methodology
The RoBERTa (Robustly Optimized BERT Approach) model, released in July 2019 by Liu et al. BIBREF23, is an open-source language model that achieves state-of-the-art results on the multi-task General Language Understanding Evaluation (GLUE) benchmark for NLP BIBREF22. RoBERTa is built upon the BERT (Bidirectional Encoder Representations from Transformers) model, released by Devlin et al. in October 2018 BIBREF19. RoBERTa and BERT achieve high performance by pretraining a transformer model, initially proposed by Vaswani et al. BIBREF17, in a bidirectional manner on a very large corpus of unlabelled text, and fine-tuning the model on a relatively small amount of task-specific labelled data. These models are well-suited for use in stance detection as the pretrained model can be leveraged to perform transfer learning on the target task. Using deep bidirectional transformer language models, RoBERTa and BERT have the ability to gain a deeper understanding of language and context when compared to earlier unidirectional transformer architectures BIBREF19. In addition, RoBERTa demonstrates great results on sentence-pair classification tasks of GLUE, such as Multi-Genre Natural Language Inference BIBREF24 and Question Natural Language Inference BIBREF25, BIBREF22, tasks very similar in nature to the claim-article classification of stance detection. Following RoBERTa's method of fine-tuning on GLUE tasks, we include both claim and article, separated by a special token, in each example during training and inference.
Experiments and Analysis ::: Dataset
To investigate the task of stance detection in the context of fake news detection, we use data released for the Fake News Challenge, Stage 1 (FNC-I). The challenge was organized by Pomerleau and Rao in 2017, with the goal of estimating the stance of an article with respect to a claim. Data is derived from the Emergent dataset BIBREF11, sourced from the Emergent Project, a real-time rumour tracker created by the Tow Center for Digital Journalism at Columbia University. The stance takes one of four labels: Agree if the article agrees with the claim, Disagree if the article disagrees with the claim, Discuss if the article is related to the claim, but the author takes no position on the subject, and Unrelated if the content of the article is unrelated to the claim. There are approximately 50k claim-article pairs in the training set and 25k pairs in the test set; Table TABREF7 summarizes the data distribution.
Experiments and Analysis ::: Metrics
To evaluate the performance of our method, we report standard accuracy as well as weighted accuracy, suggested by the organizers of the Fake News Challenge, as it provides a more objective metric for comparison given the class imbalance in the dataset. The weighted accuracy, $Acc_w$, is expressed as:
where $Acc_{r, u}$ is the binary accuracy across related {agree, disagree, discuss} and unrelated article-claim pairs, and $Acc_{a, d, d}$ is the accuracy for pairs in related classes only.
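Assuming the standard 0.25/0.75 split specified by the FNC-I organizers, this weighting can be sketched as $Acc_w = 0.25 \cdot Acc_{r, u} + 0.75 \cdot Acc_{a, d, d}$.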
Experiments and Analysis ::: Model
We construct our large-scale language model via transfer learning on a pretrained RoBERTaBASE deep transformer model, consisting of 12-layers of 768-hidden units, each with 12 attention heads, totalling 125M parameters. We leverage the Transformers library by Hugging Face for implementation BIBREF26. To perform transfer learning, we train for three epochs and follow hyperparameter recommendations by Liu et al. BIBREF23 for fine-tuning on GLUE tasks, namely, a learning rate of 2e-5 and weight decay of 0.1. We train on one NVIDIA 1080Ti GPU with a batch size of 8.
Prior to training, the dataset is pre-processed by initializing each example with a start token to signify the beginning of a sequence, followed by the claim, two separator tokens, the article and an additional separator token. The sequence is then tokenized by RoBERTa's byte-level byte-pair-encoding and trimmed or padded to fit a maximum sequence length of 512. We explore the effects of claim-article pair sequence length and maximum sequence length on classification accuracy in the Appendix.
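A minimal sketch of this claim-article pair encoding with the Hugging Face transformers library (the claim and article strings are invented placeholders, and the label order over {agree, disagree, discuss, unrelated} is an assumption):

```python
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=4)

claim = "A new study proves that coffee cures the common cold."
article = "Researchers cautioned that the small observational study shows no such effect..."

# Pair inputs are encoded as <s> claim </s></s> article </s>, padded/truncated to 512 tokens.
inputs = tokenizer(claim, article, truncation=True, padding="max_length",
                   max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits         # one score per stance class
prediction = logits.argmax(dim=-1)          # index into the assumed label order
```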
Experiments and Analysis ::: Results & Discussion
Results of our proposed method, the top three methods in the original Fake News Challenge, and the best-performing methods since the challenge's conclusion on the FNC-I test set are displayed in Table TABREF12. A confusion matrix for our method is presented in the Appendix. To the best of our knowledge, our method achieves state-of-the-art results in weighted-accuracy and standard accuracy on the dataset. Notably, since the conclusion of the Fake News Challenge in 2017, the weighted-accuracy error-rate has decreased by 8%, signifying improved performance of NLP models and innovations in the domain of stance detection, as well as a continued interest in combating the spread of disinformation.
Ethical Considerations
Implementation and potential end-users: The implementation of our stance detection model into a real-world system is predicated on the development of solutions to the document retrieval, reputation assessment and claim verification elements of an automated fact-checking system. While this is an active field of research, it is imperative to note that the reputation assessment sub-task is difficult, as the trustworthiness of an individual or media source may be interpreted differently by different individuals due to personal bias. Provided these elements can be developed, the first intended end-users of an automated fact-checking system should be journalists and fact-checkers. Validation of the system through the lens of experts of the fact-checking process is something that the system's performance on benchmark datasets cannot provide. The implementation of such a system into the daily workflow of these individuals is likely a field of research onto itself. Ultimately, the development of a simple user interface for the general public, such as a browser plug-in, is the goal of this system, assisting individuals to stay informed citizens.
Limitations: The model proposed in this work is limited by the fact that it was trained solely on claims and articles in English, from western-focused media outlets. Further work is necessary to extend this work to other languages, where differences in writing style and cultural norms and nuances may lead to differences in performance. In addition, this model is not designed to deal with satire, where the stance of an article with respect to a claim may appear on the surface to be one way, but the underlying intention of its author is to exploit humor to demonstrate an opposing viewpoint.
Risks and potential unintended negative outcomes: A major risk of a stance detection model or automated fact-checking system is the codification of unintended biases into the model through biased training data. In the field of NLP, gender and racial biases have been reported in word embeddings BIBREF35, BIBREF36 and captioning models BIBREF37; the extent to which such social biases are encoded in recently developed language models is only beginning to be studied BIBREF38, BIBREF39. A secondary risk to the roll-out of these systems is their vulnerability to adversarial attacks. Early work by Hsieh et al. to investigate the robustness of self-attentive architectures has demonstrated that adversarial examples that could mislead neural language models but not humans are capable of being developed for sentiment analysis, entailment and machine translation BIBREF40. In addition, the development of such a system may be interpreted by some as providing a definitive answer with respect to the truthfulness of a claim, rather than a predictive estimate of its veracity. A potential unintended negative outcome of this work is for people to take the outputs of an automated fact-checking system as the definitive truth, without using their own judgement, or for malicious actors to selectively promote claims that may be misclassified by the model but adhere to their own agenda.
Conclusions
We have presented a state-of-the-art large-scale language model for stance detection based upon a RoBERTa deep bidirectional transformer. Our promising results motivate efforts to develop additional sub-components of a fully automated fact-checking system such that AI can effectively be harnessed to combat disinformation and allow citizens and democratic institutions to thrive.
Claim-Article Pair Sequence Length
Table TABREF13 presents the results of the RoBERTa model on the FNC-I test set, based on the length of the claim-article pair. The model has a maximum sequence length of 512 tokens, so any examples longer than this are trimmed. We find that the model performs best for examples that utilize the full capacity of the input sequence (385 to 512 tokens). Very short sequences (<129 tokens) provide the least amount of information to the model, and the model performs poorly. Long sequences (>512 tokens) have some of their context removed from their input, and these examples also perform relatively poorly.
Maximum Sequence Length
Table TABREF14 presents the results of RoBERTa models of varying maximum sequence lengths on the FNC-I test set. We find an increase in accuracy with a longer maximum sequence length, as more context is provided to the model. We cannot increase the length of the input sequence beyond 512 tokens without training the RoBERTa model from scratch, which is not feasible for us.
Confusion Matrices
Figures FIGREF15 and FIGREF15 present confusion matrices for the previous best method and our proposed method on the FNC-I test set. | To the best of our knowledge, our method achieves state-of-the-art results in weighted-accuracy and standard accuracy on the dataset |
2555ca85ff6b56bd09c3919aa6b277eb7a4d4631 | 2555ca85ff6b56bd09c3919aa6b277eb7a4d4631_0 | Q: Which datasets are used for evaluation?
Text: Introduction
Semantic composition plays an important role in sentiment analysis of phrases and sentences. This includes detecting the scope and impact of negation in reversing a sentiment's polarity, as well as quantifying the influence of modifiers, such as degree adverbs and intensifiers, in rescaling the sentiment's intensity BIBREF0 .
Recently, a trend emerged for tackling these challenges via deep learning models such as convolutional and recurrent neural networks, as observed e.g. on the SemEval-2016 Task for Sentiment Analysis in Twitter BIBREF1 .
As these models become increasingly predictive, one also needs to make sure that they work as intended, in particular, their decisions should be made as transparent as possible.
Some forms of transparency are readily obtained from the structure of the model, e.g. recursive nets BIBREF2 , where sentiment can be probed at each node of a parsing tree.
Another type of analysis seeks to determine what input features were important for reaching the final top-layer prediction. Recent work in this direction has focused on bringing measures of feature importance to state-of-the-art models such as deep convolutional neural networks for vision BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , or to general deep neural networks for text BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 .
Some of these techniques are based on the model's local gradient information while other methods seek to redistribute the function's value on the input variables, typically by reverse propagation in the neural network graph BIBREF12 , BIBREF5 , BIBREF13 . We refer the reader to BIBREF14 for an overview on methods for understanding and interpreting deep neural network predictions.
BIBREF5 proposed specific propagation rules for neural networks (LRP rules). These rules were shown to produce better explanations than e.g. gradient-based techniques BIBREF15 , and were also successfully transferred to neural networks for text data BIBREF16 .
In this paper, we extend LRP with a rule that handles multiplicative interactions in the LSTM model, a particularly suitable model for modeling long-range interactions in texts such as those occurring in sentiment analysis.
We then apply the extended LRP method to a bi-directional LSTM trained on a five-class sentiment prediction task. It allows us to produce reliable explanations of which words are responsible for attributing sentiment in individual texts, compared to the explanations obtained by using a gradient-based approach.
Methods
Given a trained neural network that models a scalar-valued prediction function $f_c$ (also commonly referred to as a prediction score) for each class $c$ of a classification problem, and given an input vector $x$ , we are interested in computing for each input dimension $d$ of $x$ a relevance score $R_d$ quantifying the relevance of $x_d$ w.r.t. a considered target class of interest $c$ . In other words, we want to analyze which features of $x$ are important for the classifier's decision toward or against a class $c$ .
In order to estimate the relevance of a pool of input space dimensions or variables (e.g. in NLP, when using distributed word embeddings as input, we would like to compute the relevance of a word, and not just of its single vector dimensions), we simply sum up the relevance scores $R_d$ of its constituting dimensions $d$ .
In this described framework, there are two alternative methods to obtain the single input variable's relevance in the first place, which we detail in the following subsections.
Gradient-based Sensitivity Analysis (SA)
The relevances can be obtained by computing squared partial derivatives: $ R_d = \Big (\frac{\partial {f_c}}{\partial x_d}(x) \Big )^2. $
For a neural network classifier, these derivatives can be obtained by standard gradient backpropagation BIBREF17 , and are made available by most neural network toolboxes. We refer to the above definition of relevance as Sensitivity Analysis (SA) BIBREF18 , BIBREF19 . A similar technique was previously used in computer vision by BIBREF3 , and in NLP by BIBREF8 .
Note that if we sum up the relevances of all input space dimensions $d$ , we obtain the quantity $\Vert {\nabla }_{x} \; f_c({x})\Vert {_2^2}$ , thus SA can be interpreted as a decomposition of the squared gradient norm.
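A short sketch of this gradient-based relevance computation, assuming a PyTorch model that maps a (sequence length $\times $ embedding dimension) input matrix to a vector of class scores:

```python
import torch

def sa_word_relevance(model, embeddings, target_class):
    # embeddings: (seq_len, emb_dim); model returns a vector of class scores f(x).
    x = embeddings.clone().detach().requires_grad_(True)
    score = model(x)[target_class]         # scalar prediction score f_c(x)
    score.backward()
    relevance = x.grad.pow(2)              # squared partial derivatives per input dimension
    return relevance.sum(dim=-1)           # word-level relevance: sum over embedding dimensions
```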
Layer-wise Relevance Propagation (LRP)
Another technique to compute relevances was proposed in BIBREF5 as the Layer-wise Relevance Propagation (LRP) algorithm. It is based on a layer-wise relevance conservation principle, and, for a given input $x$ , it redistributes the quantity $f_c(x)$ , starting from the output layer of the network and backpropagating this quantity up to the input layer. The LRP relevance propagation procedure can be described layer-by-layer for each type of layer occurring in a deep convolutional neural network (weighted linear connections following non-linear activation, pooling, normalization), and consists in defining rules for attributing relevance to lower-layer neurons given the relevances of upper-layer neurons. Hereby each intermediate layer neuron gets attributed a relevance score, up to the input layer neurons.
In the case of recurrent neural network architectures such as LSTMs BIBREF20 and GRUs BIBREF21 , there are two types of neural connections involved: many-to-one weighted linear connections, and two-to-one multiplicative interactions. Hence, we restrict our definition of the LRP procedure to these types of connections. Note that, for simplification, we refrain from explicitly introducing a notation for non-linear activation functions; if such an activation is present at a neuron, we always take into account the activated lower-layer neuron's value in the subsequent formulas. In order to compute the input space relevances, we start by setting the relevance of the output layer neuron corresponding to the target class of interest $c$ to the value $f_c(x)$ , and simply ignore the other output layer neurons (or equivalently set their relevance to zero). Then, we compute layer-by-layer a relevance score for each intermediate lower-layer neuron according to one of the subsequent formulas, depending on the type of connection involved.
Weighted Connections. Let $z_j$ be an upper-layer neuron, whose value in the forward pass is computed as $z_j = \sum _{i}z_i \cdot w_{ij} + b_j$ , where $z_i$ are the lower-layer neurons, and $w_{ij}$ , $b_j$ are the connection weights and biases.
Given the relevances $R_j$ of the upper-layer neurons $z_j$ , the goal is to compute the lower-layer relevances $R_i$ of the neurons $z_i$ . (In the particular case of the output layer, we have a single upper-layer neuron $z_j$ , whose relevance is set to its value, more precisely we set $R_j=f_c(x)$ to start the LRP procedure.) The relevance redistribution onto lower-layer neurons is performed in two steps. First, by computing relevance messages $R_{i \leftarrow j}$ going from upper-layer neurons $z_j$ to lower-layer neurons $z_i$ . Then, by summing up incoming messages for each lower-layer neuron $z_i$ to obtain the relevance $R_i$ . The messages $R_{i \leftarrow j}$ are computed as a fraction of the relevance $R_j$ according to the following rule: $ R_{i \leftarrow j} = \frac{z_i \cdot w_{ij} + \frac{\epsilon \cdot \text{sign}(z_j) + \delta \cdot b_j}{N}}{z_j + \epsilon \cdot \text{sign}(z_j)} \cdot R_j $
where $N$ is the total number of lower-layer neurons to which $z_j$ is connected, $\epsilon $ is a small positive number which serves as a stabilizer (we use $\epsilon =0.001$ in our experiments), and $\text{sign}(z_j)=(1_{z_j \ge 0} - 1_{z_j < 0})$ is defined as the sign of $z_j$ . The relevance $R_i$ is subsequently computed as $R_i = \sum _{j} R_{i \leftarrow j}$ . Moreover, $\delta $ is a multiplicative factor that is either set to 1.0, in which case the total relevance of all neurons in the same layer is conserved, or else it is set to 0.0, which implies that a part of the total relevance is “absorbed” by the biases and that the relevance propagation rule is approximately conservative. By default we use the last variant with $\delta =0.0$ when we refer to LRP, and denote explicitly by LRP $_{cons}$ our results when we use $\delta =1.0$ in our experiments.
Multiplicative Interactions. Another type of connection is a two-way multiplicative interaction between lower-layer neurons. Let $z_j$ be an upper-layer neuron, whose value in the forward pass is computed as the multiplication of the two lower-layer neuron values $z_g$ and $z_s$ , i.e. $z_j = z_g \cdot z_s$ . In such multiplicative interactions, as they occur e.g. in LSTMs and GRUs, there is always one of the two lower-layer neurons that constitutes a gate, and whose value ranges between $[0,1]$ as the output of a sigmoid activation function (or in the particular case of GRUs, can also be equal to one minus a sigmoid activated value), we call it the $gate$ neuron $z_g$ , and refer to the remaining one as the $source$ neuron $z_s$ .
Given such a configuration, and denoting by $R_j$ the relevance of the upper-layer neuron $z_j$ , we propose to redistribute the relevance onto lower-layer neurons in the following way: we set $R_g=0$ and $R_s=R_j$ . The intuition behind this reallocation rule, is that the gate neuron decides already in the forward pass how much of the information contained in the source neuron should be retained to make the overall classification decision. Thereby the value $z_g$ controls how much relevance will be attributed to $z_j$ from upper-layer neurons. Thus, even if our local propagation rule seems to ignore the respective values of $z_g$ and $z_s$ to redistribute the relevance, these are indeed taken into account when computing the value $R_j$ from the relevances of the next upper-layer neurons to which $z_j$ is connected via weighted connections.
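A direct transcription of these two rules into NumPy (a sketch for illustration, not the released implementation):

```python
import numpy as np

def lrp_weighted(z_i, w, b, z_j, R_j, eps=0.001, delta=0.0):
    # z_i: (I,) lower-layer values, w: (I, J) weights, b: (J,) biases,
    # z_j: (J,) upper-layer values, R_j: (J,) upper-layer relevances.
    sign = np.where(z_j >= 0, 1.0, -1.0)
    N = z_i.shape[0]
    numer = z_i[:, None] * w + (eps * sign + delta * b)[None, :] / N
    messages = numer / (z_j + eps * sign)[None, :] * R_j[None, :]   # R_{i<-j}
    return messages.sum(axis=1)                                     # R_i

def lrp_multiplicative(R_j):
    # Two-way multiplicative interaction: the gate gets zero relevance, the source gets all of it.
    R_gate, R_source = 0.0, R_j
    return R_gate, R_source
```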
Recurrent Model and Data
As a recurrent neural network model we employ a one hidden-layer bi-directional LSTM (bi-LSTM), trained on five-class sentiment prediction of phrases and sentences on the Stanford Sentiment Treebank movie reviews dataset BIBREF2 , as was used in previous work on neural network interpretability BIBREF8 and made available by the authors. This model takes as input a sequence of words $x_1, x_2,..., x_T$ (as well as this sequence in reversed order), where each word is represented by a word embedding of dimension 60, and has a hidden layer size of 60. A thorough model description can be found in the Appendix, and for details on the training we refer to BIBREF8 .
In our experiments, we use as input the 2210 tokenized sentences of the Stanford Sentiment Treebank test set BIBREF2 , preprocessing them by lowercasing as was done in BIBREF8 . On five-class sentiment prediction of full sentences (very negative, negative, neutral, positive, very positive) the model achieves 46.3% accuracy, and for binary classification (positive vs. negative, ignoring neutral sentences) the test accuracy is 82.9%.
Using this trained bi-LSTM, we compare two relevance decomposition methods: sensitivity analysis (SA) and Layer-wise Relevance Propagation (LRP). The former is similar to the “First-Derivative Saliency” used in BIBREF8 , except that in their work the authors do not aggregate the relevance of single input variables to obtain a word-level relevance value (i.e. they only visualize relevance distributed over word embedding dimensions); moreover they employ the absolute value of partial derivatives (instead of squared partial derivatives as we do) to compute the relevance of single input variables.
In order to enable reproducibility and for encouraging further research, we make our implementation of both relevance decomposition methods available (see also BIBREF22 ).
Results
In this Section, we present qualitative as well as quantitative results we obtained by performing SA and LRP on test set sentences. As an outcome of the relevance decomposition for a chosen target class, we first obtain, for each word embedding $x_t$ in an input sentence, a vector of relevance values. In order to obtain a scalar word-level relevance, we recall that we simply sum up the relevances contained in that vector. Also note that, by definition, the SA relevances are positive while LRP relevances are signed.
Decomposing Sentiment onto Words
In order to illustrate the differences between SA and LRP, we provide in Fig. 1 and 2 heatmaps of exemplary test set sentences. These heatmaps were obtained by mapping positive word-level relevance values to red, and negative relevances to blue. The exemplary sentences belong either to the class “very negative” or to the class “very positive”, and the target class for relevance decomposition is always the true class. On the left of the Figures, we indicate the true sentence class, as well as the bi-LSTM's predicted class, whereby the upper examples are correctly classified while the bottom examples are falsely classified.
From the inspection of the heatmaps, we notice that SA does not clearly distinguish between words speaking for or against the target class. Indeed it sometimes attributes a comparatively high relevance to words expressing a positive appreciation like thrilling (example 5), master (example 6) or must-see (example 11), while the target class is “very negative”; or to the word difficult (example 19) expressing a negative judgment, while the target class is “very positive”. On the contrary, LRP can discern more reliably between words addressing a negative sentiment, such as waste (1), horrible (3), disaster (6), repetitive (9) (highlighted in red), or difficult (19) (highlighted in blue), from words indicating a positive opinion, like funny (2), suspenseful (2), romantic (5), thrilling (5) (highlighted in blue), or worthy (19), entertaining (20) (highlighted in red).
Furthermore, LRP explains well the two sentences that are mistakenly classified as “very positive” and “positive” (examples 11 and 17), by accentuating the negative relevance (blue colored) of terms speaking against the target class, i.e. the class “very negative”, such as must-see list, remember and future, whereas such understanding is not provided by the SA heatmaps. The same holds for the misclassified “very positive” sentence (example 21), where the word fails gets attributed a deep negatively signed relevance (blue colored). A similar limitation of gradient-based relevance visualization for explaining predictions of recurrent models was also observed in previous work BIBREF8 .
Moreover, an interesting property we observe with LRP, is that the sentiment of negation is modulated by the sentiment of the subsequent words in the sentence. Hence, e.g. in the heatmaps for the target class “very negative”, when negators like n't or not are followed by words indicating a negative sentiment like waste (1) or horrible (3), they are marked by a negatively signed relevance (blue colored), while when the subsequent words express a positive impression like worth (12), surprises (14), funny (16) or good (18), they get a positively signed relevance (red colored).
Thereby, the heatmap visualizations provide some insights on how the sentiment of single words is composed by the bi-LSTM model, and indicate that the sentiment attributed to words is not static, but depends on their context in the sentence. Nevertheless, we would like to point out that the explanations delivered by relevance decomposition highly depend on the quality of the underlying classifier, and can only be “as good” as the neural network itself, hence a more carefully tuned model might deliver even better explanations.
Representative Words for a Sentiment
Another qualitative analysis we conduct is dataset-wide, and consists in building a list of the most resp. the least relevant words for a class. To this end, we first perform SA and LRP on all test set sentences for one specific target class, as an example we take the class “very positive”. Secondly, we order all words appearing in the test sentences in decreasing resp. in increasing order of their relevance value, and retrieve in Table 1 the ten most and least relevant words we obtain. From the SA word lists, we observe that the highest SA relevances mainly point to words with a strong semantic meaning, but not necessarily expressing a positive sentiment, see e.g. broken-down, lackadaisical and mournfully, while the lowest SA relevances correspond to stop words. On the contrary, the extremal LRP relevances are more reliable: the highest relevances indicate words expressing a positive sentiment, while the lowest relevances are attributed to words defining a negative sentiment, hence both extremal relevances are related in a meaningful way to the target class of interest, i.e. the class “very positive”.
Validation of Word Relevance
In order to quantitatively validate the word-level relevances obtained with SA and LRP, we perform two word deleting experiments. For these experiments we consider only test set sentences with a length greater or equal to 10 words (this amounts to retaining 1849 test sentences), and we delete from each sentence up to 5 words according to their SA resp. LRP relevance value (for deleting a word we simply set its word embedding to zero in the input sentence representation), and re-predict via the bi-LSTM the sentiment of the sentence with “missing” words, to track the impact of these deletions on the classifier's decision. The idea behind this experiment is that the relevance decomposition method that most pertinently reveals words that are important to the classifier's decision, will impact this decision the most when deleting words according to their relevance value. Prior to the deletions, we first compute the SA resp. LRP word-level relevances on the original sentences (with no word deleted), using the true sentence sentiment as target class for the relevance decomposition. Then, we conduct two types of deletions. On initially correctly classified sentences we delete words in decreasing order of their relevance value, and on initially falsely classified sentences we delete words in increasing order of their relevance. We additionally perform a random word deletion as an uninformative variant for comparison. Our results in terms of tracking the classification accuracy over the number of word deletions per sentence are reported in Fig. 3 . These results show that, in both considered cases, deleting words in decreasing or increasing order of their LRP relevance has the most pertinent effect, suggesting that this relevance decomposition method is the most appropriate for detecting words speaking for or against a classifier's decision. While the LRP variant with relevance conservation LRP $_{cons}$ performs almost as well as standard LRP, the latter yields slightly superior results and thus should be preferred. Finally, when deleting words in increasing order of their relevance value starting with initially falsely classified sentences (Fig. 3 right), we observe that SA performs even worse than random deletion. This indicates that the lowest SA relevances point essentially to words that have no influence on the classifier's decision at all, rather than signaling words that are “inhibiting” its decision and speaking against the true class, as LRP is indeed able to identify. Similar conclusions were drawn when comparing SA and LRP on a convolutional network for document classification BIBREF9 .
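A sketch of the deletion step itself, assuming word-level relevances and a predict function over embedding matrices (deleting a word means zeroing its embedding):

```python
import numpy as np

def delete_and_repredict(predict, embeddings, word_relevance, k, decreasing=True):
    # Zero out the embeddings of the k most (or least) relevant words and re-predict.
    order = np.argsort(word_relevance)
    if decreasing:
        order = order[::-1]
    x = embeddings.copy()
    x[order[:k]] = 0.0
    return predict(x)
```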
Relevance Distribution over Sentence Length
To get an idea of which words over the sentence length get attributed the most relevance, we compute a word relevance statistic by performing SA and LRP on all test sentences having a length greater or equal to 19 words (this amounts to 50.0% of the test set). Then, we divide each sentence length into 10 equal intervals, and sum up the word relevances in each interval (when a word is not entirely in an interval, the relevance portion falling into that interval is considered). For LRP, we use the absolute value of the word-level relevance values (to avoid that negative relevances cancel out positive relevances). Finally, to get a distribution, we normalize the results to sum up to one. We compute this statistic by considering either the total word relevance obtained via the bi-LSTM model, or by considering only the part of the relevance that comes from one of the two unidirectional model constituents, i.e. the relevance contributed by the LSTM which takes as input the sentence words in their original order (we call it left encoder), or the one contributed by the LSTM which takes as input the sentence words in reversed order (we call it right encoder). The resulting distributions, for different relevance target classes, are reported in Fig. 4 . Interestingly, the relevance distributions are not symmetric w.r.t. to the sentence middle, and the major part of the relevance is attributed to the second half of the sentences, except for the target class “neutral”, where the most relevance is attributed to the last computational time steps of the left or the right encoder, resulting in an almost symmetric distribution of the total relevance for that class.
This can perhaps be explained by the fact that, at least for longer movie reviews, strong judgments on the movie's quality tend to appear at the end of the sentences, while the beginning of the sentences serves as an introduction to the review's topic, describing e.g. the movie's subject or genre. Another particularity of the relevance distribution we notice is that the relevances of the left encoder tend to be smoother than those of the right encoder, which is a surprising result, as one might expect that both unidirectional model constituents behave similarly, and that there is no mechanism in the model to make a distinction between the text read in original and in reversed order.
Conclusion
In this work we have introduced a simple yet effective strategy for extending the LRP procedure to recurrent architectures, such as LSTMs, by proposing a rule to backpropagate the relevance through multiplicative interactions. We applied the extended LRP version to a bi-directional LSTM model for the sentiment prediction of sentences, demonstrating that the resulting word relevances reliably reveal words supporting the classifier's decision for or against a specific class, and perform better than those obtained by a gradient-based decomposition.
Our technique helps understanding and verifying the correct behavior of recurrent classifiers, and can detect important patterns in text datasets. Compared to other non-gradient based explanation methods, which rely e.g. on random sampling or on iterative representation occlusion, our technique is deterministic, and can be computed in one pass through the network. Moreover, our method is self-contained, in that it does not require to train an external classifier to deliver the explanations, these are obtained directly via the original classifier.
Future work would include applying the proposed technique to other recurrent architectures such as character-level models or GRUs, as well as to extractive summarization. Besides, our method is not restricted to the NLP domain, and might also be useful to other applications relying on recurrent architectures.
Acknowledgments
We thank Rico Raber for many insightful discussions. This work was partly supported by BMBF, DFG and also Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korea government (No. 2017-0-00451 for KRM).
Appendix
Long-Short Term Memory Network (LSTM) We define in the following the LSTM recurrence equations BIBREF20 , BIBREF23 of the model we used in our experiments: $ \begin{split} i_t &= \texttt {sigm} \;\; \Big ( W_i \; h_{t-1} + U_i \; x_t + b_i \Big ) \\ f_t &= \texttt {sigm} \; \Big ( W_f \; h_{t-1} + U_f \; x_t + b_f \Big ) \\ o_t &= \texttt {sigm} \; \Big ( W_o \; h_{t-1} + U_o \; x_t + b_o \Big ) \\ g_t &= \texttt {tanh} \; \Big ( W_g \; h_{t-1} + U_g \; x_t + b_g \Big ) \\ c_t &= f_t \odot c_{t-1} \; + \; i_t \odot g_t \\ h_t &= o_t \odot \texttt {tanh} (c_t) \end{split} $
Here above the activation functions $\texttt {sigm}$ and $\texttt {tanh}$ are applied element-wise, and $\odot $ is an element-wise multiplication.
As an input, the LSTM gets fed with a sequence of vectors $x = (x_1, x_2,..., x_T)$ representing the word embeddings of the input sentence's words. The matrices $W$ 's, $U$ 's, and vectors $b$ 's are connection weights and biases, and the initial states $h_0$ and $c_0$ are set to zero.
The last hidden state $h_T$ is eventually attached to a fully-connected linear layer yielding a prediction score vector $f(x)$ , with one entry ${f_c}(x)$ per class, which is used for sentiment prediction.
Bi-directional LSTM The bi-directional LSTM BIBREF24 we use in the present work is a concatenation of two separate LSTM models as described above, each of them taking a different sequence of word embeddings as input.
One LSTM takes as input the words in their original order, as they appear in the input sentence. The second LSTM takes as input the same words but in reversed order.
Each of these LSTMs yields a final hidden state vector, say $h^{\rightarrow }_T$ and $h^{\leftarrow }_T$ . The concatenation of these two vectors is eventually fed to a fully-connected linear layer, retrieving one prediction score ${f_c}(x)$ per class. | Stanford Sentiment Treebank |
d028dcef22cdf0e86f62455d083581d025db1955 | d028dcef22cdf0e86f62455d083581d025db1955_0 | Q: What are the strong baselines you have?
Text: Introduction
One of the main challenges in building a Natural Language Understanding (NLU) component for a specific task is the necessary human effort to encode the task's specific knowledge. In traditional NLU components, this was done by creating hand-written rules. In today's state-of-the-art NLU components, significant amounts of human effort have to be used for collecting the training data. For example, when building an NLU component for airplane travel information, there are many ways to express that someone wants to book a flight from New York to Pittsburgh. In order to build a system, we need to have seen many of them in the training data. Although more and more data has been collected and datasets with this data have been published BIBREF0 , these datasets often consist of data from a different domain than the one needed for a particular NLU component.
An inexpensive and quick way to collect data for a domain is to generate a synthetic dataset where templates are filled with various values. A problem with such synthetic datasets is to encode enough variety of natural language to be able to generalize to unseen utterances during training. To do this, an enormous amount of effort will be needed. In this work, we address this challenge by combining task-specific synthetic data and real data from another domain. The multi-task framework enables us to combine these two knowledge sources and therefore improve natural language understanding.
In this work, the NLU component is based on an attention-based encoder-decoder model BIBREF1 . We evaluate the approach on the commonly used travel information task and use the subtitles of movies and series as the out-of-domain task.
Related Work
There are many appropriate architectures for end-to-end trainable goal-oriented dialog systems BIBREF1 , BIBREF2 , BIBREF3 with different approaches for the NLU part; however, what they have in common is that they need a huge amount of training data.
Multi-task learning has been applied in many machine learning applications, e.g., in facial landmark detection, an application in the area of computer vision BIBREF4 .
Multi-task learning for sequence-to-sequence models in Natural Language Processing is described in BIBREF5 , BIBREF6 , BIBREF7 . In BIBREF5 , machine translation was trained together with either syntax parsing or image captioning on a non-attention-based encoder-decoder model. The encoder was shared between the tasks. They improved the translation between English and German by up to 1.5 BLEU points. In BIBREF6 , the authors used an attention-based encoder-decoder model and were also able to improve machine translation by up to 1.5 BLEU points by combining machine translation with part-of-speech tagging and named entity recognition. In addition, they presented different architectures for multi-task learning, such as sharing, in addition to the encoder, the attention layer or the decoder. In BIBREF7 , the authors used multi-task learning to learn to translate 20 individual languages with one system.
Multi-task Learning
In the multi-task learning approach of this work, in-domain synthetic data and out-of-domain real data are trained jointly. Synthetic datasets often lack expressions for certain situations, whereas larger out-of-domain datasets contain expressions for similar situations. By jointly training the encoder on both tasks, we expect better natural language understanding in the in-domain task, because the model can learn to encode situations independently of how they are expressed in natural language.
Architecture
We use an attention-based encoder-decoder model for multi-task learning. We share the embedding layer and the encoder between the tasks. The remaining components of the attention-based encoder-decoder model - the attention layer and the decoder with its final softmax layer - are not shared. The intuition behind this is that our synthetic datasets lack expressions for situations that are present in the out-of-domain datasets. By training on the out-of-domain datasets, we want to learn to encode situations independently of how they are expressed in natural language. For improving the encoding, we expect the best results by sharing only the encoder, because in this way knowledge from the out-of-domain dataset is transferred to the in-domain dataset.
In BIBREF7 , an attention-based encoder-decoder model that is able to share the weights of layers between tasks is described and its implementation was published. We added to this implementation an option to train instances of the smallest dataset $m$ -times and an option to accumulate gradients and published the additions under the MIT license. The architecture is depicted in Figure 1 .
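As an illustration of this sharing scheme, the following is a minimal sketch in PyTorch (an assumption on our part; it does not reproduce the published implementation referenced above, and the per-task attention layers are omitted for brevity): one embedding layer and one encoder serve both tasks, while each task keeps its own decoder and output layer.

```python
import torch.nn as nn

class SharedEncoderMultiTask(nn.Module):
    """Embedding + encoder shared across tasks; decoder and softmax layer kept per task."""
    def __init__(self, vocab_size, emb_size=1024, hidden_size=256, num_tasks=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_size)                 # shared
        self.encoder = nn.LSTM(emb_size, hidden_size, batch_first=True)     # shared
        self.decoders = nn.ModuleList(
            [nn.LSTM(emb_size, hidden_size, batch_first=True) for _ in range(num_tasks)]
        )
        self.output_layers = nn.ModuleList(
            [nn.Linear(hidden_size, vocab_size) for _ in range(num_tasks)]
        )

    def forward(self, src_ids, tgt_ids, task_id):
        # Encode the source sequence once with the shared components.
        enc_out, (h, c) = self.encoder(self.embedding(src_ids))
        # Decode with the task-specific decoder, initialized from the shared encoder state.
        dec_out, _ = self.decoders[task_id](self.embedding(tgt_ids), (h, c))
        return self.output_layers[task_id](dec_out)
```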
Training Schedule
In BIBREF6 , only one task in each mini-batch is considered because this is more GPU-efficient given that not all weights are shared between the tasks. Let $n$ be the number of instances that are trained simultaneously on the GPU. The instances of one task are grouped into groups of size $n$ . These groups are randomly shuffled before every epoch during training. However, in our experiments, updating the weights after the training of a group of one task led to perplexity jumps. To avoid these jumps, we accumulate the gradients and update our weights only after $t$ groups. This means that our mini-batch size is $t \cdot n$ . We use the Adam optimization algorithm BIBREF8 for updating the weights.
After the multi-task learning, we fine-tune the model by retraining the model only with the synthetic dataset. For this fine-tuning, we reset all the parameters of the Adam optimization algorithm.
The out-of-domain datasets are huge in comparison to the synthetic datasets. To avoid instances of the synthetic dataset not being adequately considered during training of the model, each instance of the synthetic dataset is trained $m$ times during one epoch.
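A sketch of this training schedule, assuming a PyTorch-style model and optimizer (the loss helper is hypothetical, and the default values are only illustrative, chosen to match the settings reported below): gradients are accumulated over $t$ task-homogeneous groups of size $n$ before each weight update, and every synthetic instance appears $m$ times per epoch.

```python
import random

def make_groups(instances, task_id, n):
    """Split one task's instances into task-homogeneous groups of size n."""
    return [(task_id, instances[i:i + n]) for i in range(0, len(instances), n)]

def train_epoch(model, optimizer, synthetic, out_of_domain, n=128, t=11, m=10):
    groups = make_groups(synthetic * m, task_id=0, n=n)   # synthetic data repeated m times
    groups += make_groups(out_of_domain, task_id=1, n=n)
    random.shuffle(groups)                                # shuffle groups, not single instances
    optimizer.zero_grad()
    for step, (task_id, batch) in enumerate(groups, start=1):
        loss = model.loss(batch, task_id)                 # hypothetical per-batch loss helper
        loss.backward()                                   # accumulate gradients
        if step % t == 0:                                 # update weights only after t groups
            optimizer.step()
            optimizer.zero_grad()
```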
Data
For the out-of-domain task, we use two subsets of the English OpenSubtitle corpus BIBREF9 in this work. The OpenSubtitle corpus consists of the subtitles of movies and series. The first subset was published by BIBREF10 and consists of all the sentence pairs from the OpenSubtitle corpus that have the following properties: the first sentence ends with a question mark; the second sentence directly follows the first sentence and has no question mark; and the time difference between the sentences is less than 20 seconds. In total, the subset has more than 14 million sentence pairs for training and 10 000 sentence pairs for validation. In the following sections, this dataset is called OpenSubtitles QA. We created the second subset in a similar manner as the subtle dataset BIBREF11 was created. It consists of sentence pairs with the following properties: the second sentence directly follows the first sentence; both sentences end with a period, exclamation point, or question mark; and between the two sentences there is at most a pause of 1 second. In the following sections, this dataset is called OpenSubtitles dialog. To be able to train the attention-based encoder-decoder model in a reasonable time, we only used the first 14 million sentence pairs for training. The next 10 000 sentence pairs were used for validation. For both datasets we used the default English word tokenizer of the Natural Language Toolkit (NLTK) BIBREF12 for tokenization. Because the OpenSubtitle corpus uses a different tokenization than the NLTK tokenizer, we merged the tokens 's, 're, 't, 'll, and 've with their preceding token in the OpenSubtitles dialog dataset to improve compatibility with the NLTK tokenization.
We generated two synthetic datasets. Both are based on a subset of the ATIS (Airline Travel Information Systems) dataset BIBREF13 that was published by BIBREF14 and is called ATIS in the following sections. In the ATIS corpus, every user utterance has one or multiple intents and every word of a user utterance is tagged in the IOB format. The format is depicted in Figure 2 . However, the out-of-domain dataset is not an intent and slot filling task but a sequence-to-sequence task. To train both tasks together, we converted the intent and slot filling task to a sequence-to-sequence task. The conversion is also depicted in Figure 2 .
In the ATIS dataset, there are 4479 tagged user utterances for training, 500 for validation and 893 for testing.
The smaller synthetic dataset consists of 212 templates that form 17 679 source-target sequence pairs after filling the template placeholders and is called ATIS small in the following sections; the larger dataset consists of 832 templates that form 70 040 source-target sequence pairs and is called ATIS medium. The ATIS small dataset was generated by extracting all the sequences whose target sequence contains a parameter not included in any target sequence extracted before. The ATIS medium dataset was formed by extracting all the sequences whose target sequence contains a parameter combination not included in any target sequence extracted before. In the extracted sequences, the parameter values were replaced by placeholders to obtain templates, and all possible values were then inserted for the placeholders. When one template produced more than 1000 source-target sequence pairs, the random permutation algorithm BIBREF15 was used instead of the Cartesian product; it produces as many source-target sequence pairs as there are values of the placeholder with the greatest number of values. For both datasets, we alphabetically sorted the parameters to ease the learning process.
Evaluation
We evaluate the quality of the predicted intent and parameter values with the metric F1-score. For averaging the F1-score over the target sequences, we use micro-averaging. This means that we count the true positives, false positives, and false negatives for all the sequences and calculate the recall and precision for the F1-score with these. In addition, we provide the metric intent accuracy. For the intent accuracy, the number of completely correct predicted intents (the intents of the reference and hypothesis must be the same) is divided by the number of target sequences.
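The two metrics can be computed directly from the parsed outputs; a minimal sketch (assuming the intents and parameter values have already been extracted from the reference and hypothesis sequences):

```python
def micro_f1(references, hypotheses):
    """references/hypotheses: lists of sets of predicted items (intents and parameter values)."""
    tp = fp = fn = 0
    for ref, hyp in zip(references, hypotheses):
        tp += len(ref & hyp)
        fp += len(hyp - ref)
        fn += len(ref - hyp)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def intent_accuracy(ref_intents, hyp_intents):
    """Intents must match the reference completely for a sequence to count as correct."""
    correct = sum(1 for r, h in zip(ref_intents, hyp_intents) if r == h)
    return correct / len(ref_intents)
```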
System Setup
We optimized our single-task baseline to obtain a strong baseline, in order to rule out that multi-task learning outperforms single-task learning only because of the following two factors: network parameters that happen to suit the multi-task learning approach better, and more favorable randomness during multi-task training. To exclude the first factor, we tested different hyperparameters for the single-task baseline. We tested all combinations of the following hyperparameter values: 256, 512, or 1024 as the sizes of the LSTM hidden states, 256, 512, or 1024 as word embedding sizes, and a dropout of 30 %, 40 %, or 50 %. We used subword units generated by byte-pair encoding (BPE) BIBREF16 as inputs for our model. To avoid poor subword generation for the synthetic datasets, we considered the validation and test datasets in addition to the training dataset when generating the list of BPE merge operations. We trained each configuration for 14 epochs and trained every configuration three times. We chose the training run with the best validation F1-score to exclude the disadvantage of bad randomness. We obtained the best F1-score with 256 as the size of the LSTM hidden states, 1024 as word embedding size, and a dropout of 30 %. For the batch size, we used 64.
We optimized our single-task model trained on real data in the same manner as the single-task baseline, except that we used 64 epochs.
In the multi-task learning approach, we trained both tasks for 10 epochs. We set $m$ (the instance multiplier for the synthetic dataset) to a value such that the synthetic dataset reaches nearly one-tenth of the size of the out-of-domain dataset. Because of long training times, we were not able to optimize the hyperparameters. We chose 256 as the size of the LSTM hidden states, 1024 as word embedding size, and 50 % for the dropout, and could not perform multiple runs. For $n$ (the number of instances that are trained simultaneously on the GPU), we chose 128, and for $t$ (the number of groups after which the model weights are updated) we chose 11. Other hyperparameters in the single-task and multi-task experiments were not changed from the default values of the published implementation.
We used the best epoch with regard to the validation F1-score to fine-tune our model. To exclude improvements that are due only to a good random initialization, we made three runs, used the epoch with the best validation F1-score from every run, and chose the run with the worst validation F1-score for evaluation. We used 64 as the batch size, 50 % as dropout, and 14 as the number of epochs.
We used subword units generated by BPE for all approaches and used 40 000 as the limit for the number of BPE merging operations as well as the vocabulary size.
Results
In Figure 3 , the test F1-score of the training run of the configuration with the best validation F1-score is depicted with respect to the epoch for the ATIS small dataset and in Figure 4 for the ATIS medium dataset. The best result is achieved after epoch 11 or 7, respectively. There is no trend for a further improvement after epoch 14. The test F1-score of the best epoch according to the validation F1-score is depicted in the Tables 1 and 2 , respectively.
In Table 1 , the validation and test F1-scores and intent accuracies with regard to the best validation F1-score of the multi-task learning approach with the ATIS small dataset are depicted. The test F1-score could be improved by 2.32 percentage points with multi-task learning with the OpenSubtitles QA dataset and by 4.22 percentage points, to 84.98 %, with the OpenSubtitles dialog dataset. The test intent accuracies could be improved with multi-task learning by 5.60 and 6.16 percentage points, respectively. For both out-of-domain datasets, fine-tuning did not improve the F1-score.
In Table 2 , the validation and test F1-scores and intent accuracies with regard to the best validation F1-score of the multi-task learning approach with the ATIS medium dataset are depicted. The test F1-score could be improved by 0.52 percentage points with multi-task learning with the OpenSubtitles QA dataset and by 0.30 percentage points with the OpenSubtitles dialog dataset. The test intent accuracies could be improved with multi-task learning by 0.34 and 1.79 percentage points, respectively. These improvements are not large, but the F1-score of the multi-task learning with the OpenSubtitles QA dataset is only 0.13 percentage points below the result of the model trained on the complete real training data of the ATIS dataset.
Conclusions and Further Work
In this work, we evaluated whether training on a synthetic dataset alongside an out-of-domain dataset can improve quality in comparison to training only on the synthetic dataset. Although we optimized the model of the single-task learning baseline and not the model of the multi-task learning approach, we were able to increase the F1-score by 4.22 percentage points to 84.98 % for the smaller synthetic dataset (ATIS small). For the bigger dataset (ATIS medium), we could not significantly improve the results, but the results are already close to those of the model trained on the real data. Improving the quality of dialog systems for which only under-resourced synthetic datasets exist is especially helpful because the better a system is, the more it encourages users to use it; logging real user usage is then often an inexpensive way to collect data. However, when collecting real user data, it is necessary to take privacy laws into account.
The problem with the OpenSubtitles QA dataset is that its form - a question as source sequence and an answer as target sequence - differs from the form of the ATIS datasets. The problem with the OpenSubtitles dialog dataset is that it is very noisy: responses often do not refer to the previous utterance. In future work, it would be interesting to test other datasets, or combinations of datasets, whose form fits better or which are less noisy.
We expect a further improvement of the multi-task learning approach from optimizing the parameters of our model in the multi-task setting. However, this is very computationally intensive because the out-of-domain datasets have 14 million instances, and we therefore leave it for future work.
We evaluated the multi-task learning approach with the attention-based encoder-decoder model, but we also expect an improvement by the multi-task learning approach for other architectures, such as the transformer model BIBREF17 , which could be researched in future work.
Acknowledgement
This work has been conducted in the SecondHands project which has received funding from the European Union’s Horizon 2020 Research and Innovation programme (call: H2020-ICT-2014-1, RIA) under grant agreement No 643950. | optimize single task with no synthetic data |
593e307d9a9d7361eba49484099c7a8147d3dade | 593e307d9a9d7361eba49484099c7a8147d3dade_0 | Q: What are causal attribution networks?
Text: Causal attribution datasets
In this work we compare causal attribution networks derived from three datasets. A causal attribution dataset is a collection of text pairs that reflect cause-effect relationships proposed by humans (for example, “virus causes sickness”). These written statements identify the nodes of the network (see also our graph fusion algorithm for dealing with semantically equivalent statements) while cause-effect relationships form the directed edges (“virus” $\rightarrow $ “sickness”) of the causal attribution network.
We collected causal attribution networks from three sources of data: English Wikidata BIBREF11 , English ConceptNet BIBREF10 , and IPRnet BIBREF12 . Wikidata and ConceptNet, are large knowledge graphs that contain semantic links denoting many types of interactions, one of which is causal attribution, while IPRnet comes from an Amazon Mechanical Turk study in which crowd workers were prompted to provide causal relationships. Wikidata relations were gathered by running four search queries on the Wikidata API (query.wikidata.org). These queries searched for relations with the properties: "has immediate cause", "has effect", "has cause", or "immediate cause of". The first and third searches reverse the order of the cause and effect which we reversed back. We discarded any Wikidata relations where the cause or effect were blank, as well as one ambiguous relation where the cause was "NaN". ConceptNet attributions were gathered by searching the English ConceptNet version 5.6.0 assertions for “/r/Causes/” relations. Lastly, IPRnet was developed in BIBREF12 which we use directly.
The three networks together contain $23\,239$ causal links and $19\,096$ unique terms, of which there are $4\,265$ and $14\,831$ unique causes and effects, respectively.
Text processing and analysis
Each node in our causal attribution networks consists of an English sentence, a short written description of an associated cause and/or effect. Text analysis of these sentences was performed using CoreNLP v3.9.2 and NLTK v3.2.2 BIBREF16 , BIBREF17 . We computed Part-of-Speech (POS) tags and identified (but did not remove) stop words for these sentences. We used the standard Brown corpus as a text baseline for comparison. Text processing procedures such as lemmatization or removal of casing were not performed in order to retain information for subsequent operations. A small number of ConceptNet sentences contained `/n' and `/v' codes within the text denoting parts-of-speech tags; we removed these before applying our own POS tagger. POS tagging of the causal sentences and the baseline dataset was performed using CoreNLP by tokenizing each input using the Penn Treebank tokenizer then applying the Stanford POS tagger. This tagger uses Penn Treebank tags. We aggregated these 36 tags into NLTK's universal tagset which consists of a simpler set of 12 tags including NOUN, VERB, ADJ, and more. To simplify presentation, we chose to further collect all non-verb, non-noun, and non-adjective tags into an “Other” tag. Stop words were identified using NLTK's English stop words corpus.
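As a small sketch of these steps, the following uses NLTK's tagger with the universal tagset as a stand-in for the CoreNLP tagging pipeline described above (it assumes the relevant NLTK models and corpora have been downloaded; the function name and the aggregation into an "Other" bucket follow the description in the text):

```python
import nltk
from nltk.corpus import stopwords

STOPS = set(stopwords.words("english"))

def tag_profile(sentence):
    """Count NOUN/VERB/ADJ/Other tags and stop words in one node's sentence."""
    tokens = nltk.word_tokenize(sentence)
    tagged = nltk.pos_tag(tokens, tagset="universal")   # 12-tag universal tagset
    counts = {"NOUN": 0, "VERB": 0, "ADJ": 0, "Other": 0}
    for _, tag in tagged:
        counts[tag if tag in counts else "Other"] += 1
    n_stop = sum(1 for tok in tokens if tok.lower() in STOPS)
    return counts, n_stop
```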
Word vectors, or embeddings, are modern computational linguistics tools that project words into a learned vector space where context-based semantics of text are preserved, enabling computational understanding of text via mathematical operations on the corresponding vectors BIBREF18 . Many different procedures exist for learning these vector spaces from text corpora BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 . Document embeddings, or “sentence vectors,” extend word vectors, representing more complex multi-word expressions in a vector space of their own BIBREF22 . Given two nodes $i$ and $j$ with corresponding sentences $s_i$ and $s_j$ and sentence vector representations $\mathbf {v}_i$ and $\mathbf {v}_j$ , respectively, the vector cosine similarity $\frac{ \mathbf {v}_i \cdot \mathbf {v}_j }{ \Vert \mathbf {v}_i \Vert \Vert \mathbf {v}_j \Vert }$ is a useful metric for estimating the semantic association between the nodes. High vector similarity implies that textual pairs are approximately semantically equivalent and sentence vectors can better compare nodes at a semantic level than more basic approaches such as measuring shared words or n-grams.
We computed sentence vectors using TensorFlow BIBREF23 v1.8.0 using the Universal Sentence Encoder v2, a recently developed embedding model that maps English text into a 512-dimensional vector space and achieves competitive performance at a number of natural language tasks BIBREF24 . This model was pretrained on a variety of text corpora BIBREF24 . The Universal Sentence Encoder was tested on several baseline NLP tasks including sentiment classification and semantic textual similarity, for each of which it performs with the highest accuracy. Given the higher performance of the Universal Sentence Encoder with respect to textual similarity tasks, we elected to utilize it instead of other sentence encoding models including the character level CNN architecture used in Google's billion word baseline BIBREF25 , and weighted averaging of word vector representations BIBREF26 .
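Given the 512-dimensional sentence vectors produced by the encoder, the node-level comparison used in the fusion step below reduces to a thresholded cosine similarity; a minimal sketch (the function names are ours, and the default threshold follows the value determined later in the text):

```python
import numpy as np

def cosine_similarity(v_i, v_j):
    return float(np.dot(v_i, v_j) / (np.linalg.norm(v_i) * np.linalg.norm(v_j)))

def semantically_equivalent(v_i, v_j, t=0.95):
    """Indicator f(i, j): 1 when the cosine similarity of the sentence vectors exceeds t."""
    return 1 if cosine_similarity(v_i, v_j) > t else 0
```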
Graph fusion
Graph fusion takes two graphs $G_1=(V_1, E_1)$ and $G_2=(V_2,E_2)$ and computes a fused graph $G = (V,E)$ by identifying and combining semantically equivalent nodes (according to some measure of similarity) within and between $V_1$ and $V_2$ . Graph fusion is closely related to graph alignment and (inexact) graph matching BIBREF27 , although fusion assumes the need to identify node equivalents both within and between the networks being fused, unlike alignment and matching which generally focus on uncovering relations between $V_1$ and $V_2$ . Graph fusion is particularly important when a canonical representation for nodes, such as an ID number, is lacking, and thus equivalent nodes may appear and need to be combined. This is exactly the case in this work, where each node is a written description of a concept, and the same concept can be equivalently described in many different ways.
Here we describe Network FUsion with SEmantic Similarity (NetFUSES). This algorithm computes the fused graph $G$ given a node similarity function $f: V \times V \rightarrow \lbrace 0,1\rbrace $ . This $f$ should encode the semantic closeness between nodes $u$ and $v$ , with $f(u,v) = 1$ for semantically equivalent $u$ and $v$ and $f(u,v) = 0$ for semantically non-equivalent $u$ and $v$ . We assume $f(u,u) = 1$ and $f(u,v) = f(v,u)$ .
To fuse $G_1$ and $G_2$ into $G$ , first compute $F = \lbrace f(u,v) \mid u,v \in V_1 \cup V_2 \rbrace $ . One can interpret $F$ as (the edges of) a fusion indicator graph defined over the combined node sets of $G_1$ and $G_2$ . Each connected component in $F$ then corresponds to a subset of $V_1 \cup V_2$ that should be combined into a single node in $V$ . (One can also take a stricter view and combine nodes corresponding to completely dense connected components of $F$ instead of any connected components, but this strictness can also be incorporated by making $f$ more strict.) Let $F(v)$ indicate the connected component of $F$ containing node $v$ . Abusing notation, one can also consider $F(v)$ as representing the node in $V$ that the unfused node $v$ maps onto. Lastly, we define the edges $E$ of the fused graph based on the neighborhoods of nodes in $G_1$ and $G_2$ : the neighborhood of each fused node $F(v)$ is the union of the neighborhoods (in $G_1$ and $G_2$ ) of all the unfused nodes belonging to the component $F(v)$ . This neighborhood defines the edges incident on $F(v)$ in the fused graph, and $E$ may now be computed. Notice by this procedure that if an edge already exists in $G_1$ and/or $G_2$ between two nodes $u$ and $v$ that share a connected component in $F$ , then a self-loop is created in $G$ when $u$ and $v$ are combined. For our purposes these self-loops are meaningful, but otherwise they can be discarded.
Semantic similarity In this work, each node $i$ is represented only by a short written sentence $s_i$ , and two sentences $s_i \ne s_j$ may in fact be different descriptions of the same underlying concept. Hence the need for NetFUSES. To relate two sentences $s_i$ and $s_j$ semantically, we rely upon recent advances in natural language processing that can embed words and multiword expressions into a semantically-meaningful vector space (see Sec. "Discussion" ). Let $\mathbf {v}_i$ be the “sentence vector” corresponding to $s_i$ . Then define $f(i,j) = 1$ if $\frac{ \mathbf {v}_i \cdot \mathbf {v}_j }{ \Vert \mathbf {v}_i \Vert \Vert \mathbf {v}_j \Vert } > t$ and zero otherwise, for some parameter $t$ . In other words, we consider nodes $i$ and $j$ to be semantically equivalent when the cosine similarity between their vectors exceeds a given threshold $t$ . Our procedure in the main text determined an appropriate value for this threshold ( $t \approx 0.95$ ).
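A compact sketch of the NetFUSES procedure using networkx; the similarity test is passed in as a callable (for example the thresholded cosine similarity sketched earlier), and the function names and the all-pairs loop are our own simplifications:

```python
import itertools
import networkx as nx

def netfuses(G1, G2, similar):
    """Fuse directed graphs G1 and G2: nodes u, v are combined whenever similar(u, v) is true."""
    nodes = list(set(G1) | set(G2))
    # Fusion indicator graph F over the combined node set.
    F = nx.Graph()
    F.add_nodes_from(nodes)
    F.add_edges_from((u, v) for u, v in itertools.combinations(nodes, 2) if similar(u, v))
    # Each connected component of F becomes one fused node.
    component_of = {}
    for cid, comp in enumerate(nx.connected_components(F)):
        for u in comp:
            component_of[u] = cid
    fused = nx.DiGraph()
    fused.add_nodes_from(set(component_of.values()))
    for G in (G1, G2):
        for u, v in G.edges():
            fused.add_edge(component_of[u], component_of[v])  # may create self-loops
    return fused
```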
Capture-recapture
Capture-recapture (also known as mark-and-recapture and recapture sampling) methods are statistical techniques for estimating the size of an unobserved population by examining the intersection of two or more independent samples of that population BIBREF28 , BIBREF29 . For example, biologists wishing to understand how many individuals of a species exist in an environment may capture $n_1$ individuals, tag and release them, then later gather another sample by capturing $n_2$ individuals. The more individuals in the second sample that carry tags, the more likely it is that the overall population $N$ is small; conversely, if the overlap in the samples is small, then it is likely that $N$ is large. Capture-recapture is commonly used by biologists and ecologists for exactly this purpose, but it has been applied to many other problems as well, including estimating the number of software faults in a large codebase BIBREF28 and estimating the number of relevant academic articles covering a specific topic of interest BIBREF30 .
The simplest estimator for the unknown population size $N$ is the Lincoln-Petersen estimator. Assuming the samples generated are unbiased, meaning that each member of the population is equally likely to be captured, then the proportion of captured individuals in the second sample who were tagged should be approximately equal to the overall capture probability for the first sample, $n_1 / N \approx n_{12} / n_2$ . Solving for $N$ gives the intuitive Lincoln-Petersen estimator $\hat{N} = {n_1 n_2}/{ n_{12}}$ , for $n_{12} > 0$ . While a good starting point, this estimator is known to be biased for small samples BIBREF29 , and much work has been performed to determine improved estimators, such as the well-known Chapman estimator BIBREF31 .
In this work we use the recently developed Webster-Kemp estimator BIBREF30 :
$$\hat{N} = \frac{\left(n_1-n_{12}+1\right)\left(n_2-n_{12}+1\right)}{n_{12}} + n_1 + n_2 - n_{12},$$ (Eq. 6)
which assumes (i) that one tried to capture as many items as possible (as opposed to predetermining $n_1$ and $n_2$ and capturing until reaching those numbers) and (ii) the total number of items found $n_1 + n_2 - n_{12} \gg 1$ . Webster and Kemp also derive the variance of this estimator:
$$\sigma ^{2}_{\hat{N}} = \frac{(n_1-n_{12}+1)(n_2-n_{12}+1)(n_1+1)(n_2+1)}{n_{12}^{2}(n_{12}-1)},$$ (Eq. 7)
with $n_{12} > 1$ , allowing us to assess our estimate uncertainty. Equations ( 6 ) and ( 7 ) are approximations when assuming a flat prior on $N$ but are exact when assuming an almost-flat prior on $N$ that slightly favors larger populations $N$ over smaller BIBREF30 .
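A direct transcription of Equations ( 6 ) and ( 7 ) into Python; the usage line plugs in the node counts and overlap reported in the Results section and reproduces the estimate quoted there:

```python
def webster_kemp(n1, n2, n12):
    """Webster-Kemp estimate of the population size and its variance (Eqs. 6 and 7)."""
    n_hat = (n1 - n12 + 1) * (n2 - n12 + 1) / n12 + n1 + n2 - n12
    var = ((n1 - n12 + 1) * (n2 - n12 + 1) * (n1 + 1) * (n2 + 1)) / (n12 ** 2 * (n12 - 1))
    return n_hat, var

n_hat, var = webster_kemp(n1=12741, n2=5316, n12=208)   # node counts from the Results section
print(round(n_hat, 1), round(1.96 * var ** 0.5, 1))      # estimate and approximate 95% CI half-width
```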
Results
Here we use network and text analysis tools to compare causal attribution networks (Sec. "Comparing causal networks" ). Crucially, nodes in these networks are defined only by their written descriptions, and multiple written descriptions can represent the same conceptual entity. Thus, to understand how causal attribution networks can be combined, we introduce and analyze a method for fusing networks (Sec. "Fusing causal networks" ) that builds off both the network structure and associated text information and explicitly incorporates conceptual equivalencies. Lastly, in Sec. "Inferring the size of the causal attribution network" we use the degree of overlap in these networks as a means to infer the total size of the one underlying causal attribution network being explored by these data collection efforts, allowing us to better understand the size of collective space of cause-effect relationships held by humans.
Comparing causal networks
We perform a descriptive analysis of the three datasets, comparing and contrasting their features and properties. We focus on two aspects, the network structure and the text information (the written descriptions associated with each node in the network). Understanding these data at these levels can inform efforts to combine different causal attribution networks (Sec. "Fusing causal networks" ).
Table 1 and Fig. 2 summarize network characteristics for the three causal attribution networks. We focus on standard measures of network structure, measuring the sizes, densities, motif structure, and connectedness of the three networks. Both Wikidata and ConceptNet, the two larger networks, are highly disconnected, amounting to collections of small components with low density. In contrast, IPRnet is smaller but comparatively more dense and connected, with higher average degree, fewer disconnected components, and more clustering (Table 1 ). All three networks are degree dissortative, meaning that high-degree nodes are more likely to connect to low-degree nodes. For connectedness and path lengths, we consider both directed and undirected versions of the network allowing us to measure strong and weak connectivity, respectively. All three networks are well connected when ignoring link directionality, but few directed paths exist between disparate nodes in Wikidata and ConceptNet, as shown by the large number of strong connected components and small size of the strong giant components for those networks.
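The structural measures summarized here can be computed with standard graph libraries; the following is a sketch using networkx, assuming each causal attribution network is given as a list of (cause, effect) pairs:

```python
import networkx as nx

def network_summary(edges):
    """edges: iterable of (cause, effect) pairs forming a directed causal attribution network."""
    G = nx.DiGraph(edges)
    return {
        "nodes": G.number_of_nodes(),
        "edges": G.number_of_edges(),
        "density": nx.density(G),
        "mean_degree": sum(d for _, d in G.degree()) / G.number_of_nodes(),
        "degree_assortativity": nx.degree_assortativity_coefficient(G),
        "weak_components": nx.number_weakly_connected_components(G),
        "strong_components": nx.number_strongly_connected_components(G),
        "clustering": nx.average_clustering(G.to_undirected()),
    }
```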
To examine motifs, we focus on feedback loops and feedforward loops, both of which play important roles in causal relationships BIBREF32 , BIBREF33 . The sparse Wikidata network has neither type of loop, while ConceptNet has 87 feedforward loops and 1 feedback loop (Table 1 ). In contrast, IPRnet has far more loops, with 986 feedback and 3541 feedforward loops.
Complementing the statistics shown in Table 1 , Fig. 2 shows the degree distributions ( 2 A), distributions of component sizes ( 2 B), and distributions of two centrality measures ( 2 C). All three networks display a skewed or heavy-tailed degree distribution. We again see that Wikidata and ConceptNet appear similar to one another while IPRnet is quite different, especially in terms of centrality. One difference between ConceptNet and Wikidata visible in 2 A is a mode of nodes with degree $\sim 30$ within ConceptNet that is not present in Wikidata.
Understanding the network structure of each dataset only accounts for part of the information. Each node $i$ in these networks is associated with a sentence $s_i$ , a written word or phrase that describes the cause or effect that $i$ represents. Investigating the textual characteristics of these sentences can then reveal important similarities and differences between the networks.
To study these sentences, we apply standard tools from natural language processing and computational linguistics (see Sec. "Data and Methods" ). In Table 2 and Fig. 3 we present summary statistics including the total text size, average length of sentences, and so forth, across the three networks. We identify several interesting features. One, IPRnet, the smallest and densest network, has the shortest sentences on average, while ConceptNet has the longest sentences (Table 2 and Fig. 3 A). Two, ConceptNet sentences often contain stop words (`the,' `that,' `which,', etc.; see Sec. "Data and Methods" ) which are less likely to carry semantic information (Fig. 3 B). Three, Wikidata contains a large number of capitalized sentences and sentences containing numerical digits. This is likely due to an abundance of proper nouns, names of chemicals, events, and so forth. These textual differences may make it challenging to combine these data into a single causal attribution network.
We next applied a Part-of-Speech (POS) tagger to the sentences (Sec. "Data and Methods" ). POS tags allow us to better understand and compare the grammatical features of causal sentences across the three networks, for example, if one network's text is more heavily focused on nouns while another network's text contains more verbs. Additionally, POS tagging provides insight into the general language of causal attribution and its characteristics. As a baseline for comparison, we also present in Fig. 3 C the POS frequencies for a standard text corpus (Sec. "Data and Methods" ). As causal sentences tend to be short, often incomplete statements, it is plausible for grammatical differences to exist compared with formally written statements as in the baseline corpus. For conciseness, we focus on nouns, verbs, and adjectives (Sec. "Data and Methods" ). Nouns are the most common Part-of-Speech in these data, especially for Wikidata and IPRnet that have a higher proportion of nouns than the baseline corpus (Fig. 3 C). Wikidata and IPRnet have correspondingly lower proportions of verbs than the baseline. These proportions imply that causal attributions contain a higher frequency of objects committing actions than general speech. However, ConceptNet differs, with proportions of nouns and verbs closer to the baseline. The baseline also contains more adjectives than ConceptNet and IPRnet. Overall, shorter, noun-heavy sentences may either help or harm the ability to combine causal attribution networks, depending on their ambiguity relative to longer, typical written statements.
Fusing causal networks
These causal attributions networks are separate efforts to map out the underlying or latent causal attribution network held collectively by humans. It is natural to then ask if these different efforts can be combined in an effective way. Fusing these networks together can provide a single causal attribution network for researchers to study.
At the most basic level, one can fuse these networks together simply by taking their union, defining a single network containing all the unique nodes and edges of the original networks. Unfortunately, nodes in these networks are identified by their sentences, and this graph union assumes that two nodes $i$ and $j$ are equivalent iff $s_i = s_j$ . This is overly restrictive as these sentences serve as descriptions of associated concepts, and we ideally want to combine nodes that represent the same concept even when their written descriptions differ. Indeed, even within a single network it can be necessary to identify and combine nodes in this way. We identify this problem as graph fusion. Graph fusion is a type of record linkage problem and is closely related to graph alignment and (inexact) graph matching BIBREF27 , but unlike those problems, graph fusion assumes the need to identify node equivalencies both within and between the networks being fused.
We introduce a fusion algorithm, NetFUSES (Network FUsion with SEmantic Similarity) that allows us to combine networks using a measure of similarity between nodes (Sec. "Data and Methods" ). Crucially, NetFUSES can handle networks where nodes may need to be combined even within a single network. Here we compare nodes by focusing on the corresponding sentences $s_i$ and $s_j$ of the nodes $i$ and $j$ , respectively, in two networks. We use recent advances in computational linguistics to define a semantic similarity $S(s_i,s_j)$ between $s_i$ and $s_j$ and consider $i$ and $j$ as equivalent when $S(s_i,s_j) \ge t$ for some semantic threshold $t$ .
Applying NetFUSES with our semantic similarity function (Sec. "Data and Methods" ) requires determining a single parameter, the similarity threshold $t$ . One can identify a value of $t$ using an independent analysis of text, but we argue for a simple indicator of its value given the networks: growth in the number of self-loops as $t$ is varied. If two nodes $i$ and $j$ that are connected before fusion are combined into a single node $u$ by NetFUSES, then the edge $i\rightarrow j$ becomes the self-loop $u \rightarrow u$ . Yet the presence of the original edge $i \rightarrow j$ generally implies that those nodes are not equivalent, and so it is more plausible that combining them is a case of over-fusion than it would have been if $i$ and $j$ were not connected. Of course, in networks such as the causal attribution networks we study, a self-loop is potentially meaningful, representing a positive feedback where a cause is its own effect. But these self-loops are quite rare (Table 1 ) and we argue that creating additional self-loops via NetFUSES is more likely to be over-fusion than the identification of such feedback. Thus we can study the growth in the number of self-loops as we vary the threshold $t$ and take as an approximate value for $t$ the point at which new self-loops start to form.
Figure 4 identifies a clear value of the similarity threshold $t\approx 0.95$ . We track as a function of threshold the number of nodes, edges, and self-loops of the fusion of Wikidata and ConceptNet, the two largest and most similar networks we study. The number of self-loops remains nearly unchanged until the level of $t = 0.95$ , indicating that as the likely onset point of over-fusion. Further lowering the similarity threshold leads to growth in the number of self-loops, until eventually the number of self-loops begins to decrease as nodes that each have self-loops are themselves combined. Thus, with a clear onset of self-loop creation, we identify $t = 0.95$ to fuse these two networks together.
Inferring the size of the causal attribution network
These three networks represent separate attempts to map out and record the collective causal attribution network held by humans. Of the three, IPRnet is most distinct from the other two, being smaller in size, denser, and generated by a unique experimental protocol. In contrast, Wikidata and ConceptNet networks are more similar in terms of how they were constructed and their overall sizes and densities.
Treating Wikidata and ConceptNet as two independent “draws” from a single underlying network allows us to estimate the total size of this latent network based on their overlap. (We exclude IPRnet as this network is generated using a very different mechanism than the others.) High overlap between these samples implies a smaller total size than low overlap. This estimation technique of comparing overlapping samples is commonly used in wildlife ecology and is known as capture-recapture or mark-and-recapture (see Sec. "Capture-recapture" ). Here we use the Webster-Kemp estimator (Eqs. ( 6 ) and ( 7 )), but given the size of the samples this estimator will be in close agreement with the simpler Lincoln-Petersen estimator.
We first begin with the strictest measure of overlap, exact matching of sentences: node $i$ in one network overlaps with node $j$ in the other network only when $s_i = s_j$ . We then relax this strict assumption by applying NetFUSES as presented in Sec. "Fusing causal networks" .
Wikidata and ConceptNet contain 12 741 and 5 316 nodes, respectively, and the overlap in these sets (when strictly equating sentences) is 208. Substituting these quantities into the Webster-Kemp estimator gives a total number of nodes of the underlying causal attribution network of $\hat{N} = 325\,715.4 \pm 43\,139.2$ ( $\pm $ 95% CI). Comparing $\hat{N}$ to the size of the union of Wikidata and ConceptNet indicates that these two experiments have explored approximately 5.48% $\pm $ 0.726% of causes and effects.
However, this estimate is overly strict in that it assumes any difference in the written descriptions of two nodes means the nodes are different. Yet, written descriptions can easily represent the same conceptual entity in a variety of ways, leading to equivalent nodes that do not have equal written descriptions. Therefore we repeated the above estimation procedure using Wikidata and ConceptNet networks after applying NetFUSES (Sec. "Fusing causal networks" ). NetFUSES incorporates natural language information directly into the semantic similarity, allowing us to incorporate, to some extent, natural language information into our node comparison.
Applying the fusion analysis of Sec. "Fusing causal networks" and combining equivalent nodes within the fused Wikidata and ConceptNet, networks, then determining whether fused nodes contain nodes from both original networks to compute the overlap in the two networks, we obtain a new estimate of the underlying causal attribution network size of $\hat{N} = 293\,819.0 \pm 39\,727.3$ . This estimate is smaller than our previous, stricter estimate, as expected due to the fusion procedure, but within the previous estimate's margin of error. Again, comparing this estimate to the size of the union of the fused Wikidata and ConceptNet networks implies that the experiments have explored approximately 5.77% $\pm $ 0.781% of the underlying or latent causal attribution network.
Finally, capture-recapture can also be used to measure the number of links in the underlying causal attribution network by determining if link $i\rightarrow j$ appears in two networks. Performing the same analysis as above, after incorporating NetFUSES, provides an estimate of $\hat{M} = 10\,235\,150 \pm 8\,962\,595.9$ links. This estimate possesses a relatively large confidence interval due to low observed overlap in the sets of edges. According to this estimate, $0.198\% \pm 0.174\%$ of links have been explored.
Discussion
The construction of causal attribution networks generates important knowledge networks that may inform causal inference research and even help future AI systems to perform causal reasoning, but these networks are time-consuming and costly to generate, and to date no efforts have been made to combine different networks. Our work not only studies the potential for fusing different networks together, but also infers the overall size of the total causal attribution network being explored.
We used capture-recapture estimators to infer the number of nodes and links in the underlying causal attribution network, given the Wikidata and ConceptNet networks and using NetFUSES and a semantic similarity function to help account for semantically equivalent nodes within and between Wikidata and ConceptNet. The validity of these estimates depends on Wikidata and ConceptNet being independent samples of the underlying network. As with many practical applications of capture-recapture in wildlife ecology and other areas, here we must question how well this independence assumption holds. The best way to sharpen these estimates is to introduce a new causal attribution survey specifically designed to capture either nodes or links independently (it is unlikely that a single survey protocol can sample independently both nodes and links), and then perform this same survey multiple times.
NetFUSES is a simple approach to graph fusion, in this case building off advances made in semantic representations of natural language, although any similarity function can be used to identify semantically equivalent nodes as appropriate. We anticipate that more accurate and more computationally efficient methods for graph fusion can be developed, but even the current method may be useful in a number of other problem domains.
Acknowledgments
This material is based upon work supported by the National Science Foundation under Grant No. IIS-1447634. | networks where nodes represent causes and effects, and directed edges represent cause-effect relationships proposed by humans |
6f8881e60fdaca7c1b35a5acc7125994bb1206a3 | 6f8881e60fdaca7c1b35a5acc7125994bb1206a3_0 | Q: How accurate is their predictive model?
Text: Introduction
Urban legends are a genre of modern folklore consisting of stories told as true – and plausible enough to be believed – about some rare and exceptional events that supposedly happened to a real person or in a real place.
Whether urban legends are produced by individual authors or emerge spontaneously, they typically spread “virally" across communities and tend to change over time with repetition and embellishment, like memes BIBREF0 . For example, the sewer alligator, which originally “appeared” in New York City BIBREF1 , also appeared in different cities to suit regional variations. Though it is considered synonymous with “false belief," the term urban legend refers to a subtler and more complex phenomenon. The crucial factor is that the story is told as true in the absence of verification. Folklorists are generally more interested in the social context and meaning of urban legends than their truth value. From an NLP point of view, instead, it is interesting to computationally explore those linguistic characteristics that make them appealing and bring people to circulate them. With the advent of the Internet, urban legends gained new lifeblood, as they began to be circulated by e-mail.
In BIBREF2 , the authors discuss the idea of “stickiness" popularized by the book “The Tipping Point” BIBREF3 , seeking to explain what makes an idea or concept memorable or interesting. They also focus on urban legends and claim that, by following the acronym “SUCCES" (each letter referring to a characteristic that makes an idea “sticky"), it is possible to describe their prototypical structure:
Such features are allegedly placed at the core of persuasive and viral language; urban legends constitute an ideal framework with which to computationally verify these assertions. Table 1 displays a few examples of urban legends claims.
In particular we will investigate some of the prototypical characteristics that can be found in urban legends as compared to similar literary genres. In our view, urban legends are viral since they are stressed by a tension between credible and incredible: credible like a news story and incredible like a fairy tale. We will focus on the idea that urban legends should mimic the details of news (who, where, when) to be credible, and that they should be emotional and readable like the story of a fairy tale to be catchy and memorable. We will verify these psychological hypotheses, which have appeared in the literature, using NLP tools, to drive a quantitative analysis of these qualitative theories. For example, the idea that urban legends derive much of their credibility from details concerning the location where the situation took place is presented in BIBREF4 . Anecdotally, the television series “1000 Ways to Die" – which recreates unusual supposed deaths and debunked urban legends in a way similar to the Darwin Awards – introduces each story with the location and date of each supposed incident, to render it more credible.
In the tension between credible and incredible, details should be neither too specific, like in the news, nor too few, as in fairy tales: effective urban legends should be credible but not verifiable. Similarly, emotions should be enough to make it sticky/catchy but not too much to render it not-credible. Finally urban legends should be easy to read, similar to fairy tales, to render them more memorable. As an example consider the following excerpt, taken from the “Kidney Theft” urban legend, as reported by snopes.com:
There is no very strong emotional wording in this example, it is the situation itself that is scary; on the contrary the email contains locations, the signature of a presumed Jerry Mayfield, and – noticeably – credibility is also explicitly addressed in the text with the adjectives “real”, “documented” and “confirmable”.
In the following sections we first review relevant work that addresses the problem of deceptive language and behavior both in online and offline scenarios, followed by an overview of work that addresses the virality of online content. Then we describe the data collected for our experiments and the features extracted to model the aforementioned prototypical characteristics of urban legends. We use these features in both descriptive statistics and generalization tasks and we report the best performing features. Finally we discuss future research on further prototypical characteristics of urban legends.
Related Work
The topic of deceptive and/or false messages is a burning topic within the NLP community. A seminal work on the linguistic recognition of lies can be found in BIBREF5 . Still, defense from subtle persuasive language in broadcast messages, including social networks, is needed in many applied scenarios. Viral messages have become a very important factor for persuasion and are currently almost entirely out of control. So, protection from fraudulent communication is needed, especially in competitive commercial situations. Two main approaches are currently under investigation in the literature:
1) Recognizing the linguistic characteristics of deceptive content in the social web: for example preventing deceptive consumer reviews BIBREF6 on sites like Trip Advisor is fundamental both for consumers seeking genuine reviews, and for the reputation of the site itself. Deceptive consumer reviews are fictitious opinions that have been deliberately written to sound authentic. Another example concerns online advertising BIBREF7 : detecting fraudulent ads is in the interest of users, of service providers (e.g. Google AdWords system), and other advertisers. An interesting phenomenon at the crossroad of viral phenomena and deceptive customer reviews, where ironic reviews (such as the case of the mountain three wolf moon) create phenomena of social contagion, is discussed in BIBREF8 .
2) Recognizing on-line behavioral patterns of deceptive users: For example recognizing groups of propagandists or fake accounts that are used to push the virality of content BIBREF9 . Four main patterns are recognized: (i) sending high volumes of tweets over short periods of time, (ii) retweeting while publishing little original content, (iii) quickly retweeting, and (iv) colluding with other, seemingly unrelated, users to send duplicate or near-duplicate messages on the same topic simultaneously. Another example is BIBREF10 where the authors hypothesize that there is a set of representative distributions of review rating scores. Deceptive business entities that hire people to write fake reviews can then be recognized since they will necessarily distort distribution of review scores, leaving “distributional footprints" behind.
We want to consider a third point, which is linked to the previous two but different at the same time: deceptive content that spreads quickly but without an explicit strategy to make it spread, which is the case with urban legends. Finally, the spreading dynamics of an urban legend on one hand closely resemble those of memes that undergo many variations while spreading BIBREF11 ; on the other hand their characteristics resemble those of viral content. Several researchers have studied information flow, community building and similar processes using Social Networking sites as a reference BIBREF12 , BIBREF13 , BIBREF14 . However, the great majority concentrate on network-related features without taking into account the actual content spreading within the network BIBREF15 . A hybrid approach focusing on both product characteristics and network related features is presented in BIBREF16 : in particular, the authors study the effect of passive-broadcast and active-personalized notifications embedded in an application to foster word of mouth.
Recently, the correlation between content characteristics and virality has begun to be investigated, especially with regard to textual content; in BIBREF17 , for example, features derived from sentiment analysis of comments are used to predict stories' popularity. The work in BIBREF18 uses New York Times articles to examine the relationship between emotions evoked by the content and virality, using semi-automated sentiment analysis to quantify the affectivity and emotionality of each article. Results suggest a strong relationship between affect and virality, where virality corresponds to the number of times the article was email forwarded.
The relevant work in BIBREF19 measures a different form of content spreading by analyzing which features of a movie quote make it “memorable" online. Another approach to content virality, somehow complementary to the previous one, is presented in BIBREF11 , and takes the perspective of understanding which modification dynamics make a meme spread from one person to another (while movie quotes spread remaining exactly the same). More recently, some works tried to investigate how different textual contents give rise to different reactions in the audience: the work presented in BIBREF20 correlates several viral phenomena with the wording of a post, while BIBREF21 shows that specific content features variations (like the readability level of an abstract) differentiate among virality level of downloads, bookmarking, and citations.
Datasets
To explore the characteristics of urban legends and understand the effectiveness of our ideas we collected a specific dataset. It is composed of roughly 8000 textual examples: 2518 Urban Legends (UL), 1860 Fairy Tales (FT) and 3575 Google News articles (GN). The description of how the datasets have been created follows.
Feature Extraction
After collecting the datasets we extracted four different groups of features, relevant to the prototypical characteristics we want to analyze.
Named Entities, (NE). To annotate named entities we used the TextPro toolkit BIBREF24 , and in particular its Named Entities recognition module. The output of the tool is in the IOB2 format and includes the tags Person (PER), Organization (ORG), Location (LOC) and Miscellaneous (MISC).
Temporal Expressions, (TIMEX). To annotate temporal expressions we used the toolkit TTK BIBREF25 . The output of the tool is in TimeML annotation language format BIBREF26 . In particular time expressions are flagged with TIMEX3 tags (tern.mitre.org). The tags considered are DATE, DURATION and TIME.
To compute the importance of the aforementioned features, and to explore the characteristics of urban legend texts, we used the method proposed in BIBREF5 . We calculate a score associated with a given set of entities (features), as a measure of saliency for the given word class inside the text, called coverage.
More formally, given a set of feature instances present in a text, $C = \lbrace W_1, W_2, ..., W_N \rbrace $ , we define the feature coverage in that text (or corpus) $A$ as the percentage of words from $A$ belonging to the feature set $C$ :
$$Coverage_{A}(C) = \frac{\sum _{W_{i} \in C} Frequency_A(W_i)}{Words_{A}}$$ (Eq. 14)
where $Frequency_A(W_i)$ represents the total number of feature occurrences $W_i$ inside the text A, and $Words_{A}$ represents the total size (in words) of the text. Note that we computed features' coverage regardless of their actual length: “New York City" or “Paris” both count as one LOC even if the former is composed of three tokens while the latter only of one. Note also that this approach normalizes according to text length, avoiding biases due to different corpus characteristics.
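A minimal sketch of Eq. ( 14 ), assuming the feature instances (e.g. LOC entities) have already been extracted so that each multi-token instance counts once per occurrence, as described above:

```python
def coverage(feature_occurrences, text_tokens):
    """Eq. (14): summed frequency of the extracted feature instances, normalized by text length."""
    return len(feature_occurrences) / len(text_tokens) if text_tokens else 0.0

# e.g. two LOC mentions ("New York City" and "Paris") in a 20-word sentence -> coverage 0.1
print(coverage(["New York City", "Paris"], ["token"] * 20))
```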
Sentiment (SENT). Since the three corpora have different characteristics, rather than computing word polarity using specialized bag-of-words approaches, we resort to words' prior polarity - i.e. whether a word out of context evokes something positive or something negative. This technique, even if less precise, guarantees that the same score is given to the same word in different contexts, and that none of the corpora is either overestimated or underestimated. To this end, we follow the methodology proposed in BIBREF27 , using SentiWordNet 3.0 BIBREF28 , which assigns prior polarities to words starting from their posterior polarities. In particular we choose the best performing approach. This formula uses a weighted mean, i.e. each sense weight is chosen according to a harmonic series. The rationale behind this choice is based on the assumption that more frequent senses should bear more “affective weight” than very rare senses when computing the prior polarity of a word. In particular, for each word we returned its positive (POS) and negative (NEG) prior polarity score:
$$\qquad POS =\frac{\sum _ {i=1}^{n}( \frac{1}{i} \times posScore_ {i})}{\sum _ {i=1}^{n}( \frac{1}{i})}$$ (Eq. 15)
where $posScore_ {i}$ represents the modulus of the positive polarity of the $i$ -th sense of that word. The NEG score is computed following the same procedure.
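A minimal R sketch of this harmonic-weighted mean follows; the sense scores in the example are illustrative values, not actual SentiWordNet entries.

prior_polarity <- function(sense_scores) {
  w <- 1 / seq_along(sense_scores)        # harmonic weights 1, 1/2, 1/3, ...
  sum(w * abs(sense_scores)) / sum(w)     # weighted mean over the word senses (Eq. 15)
}

# e.g. a word whose three senses have positive scores 0.75, 0.25 and 0.0
prior_polarity(c(0.75, 0.25, 0.0))        # POS prior polarity of about 0.48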
Emotions (EMO). To sense emotions from text we used the methodology described in BIBREF29 . The idea underlying the method is the distinction between direct and indirect affective words. For direct affective words, we refer to the WordNet Affect BIBREF30 lexicon, an extension of the WordNet database which employs six basic emotion labels (anger, disgust, fear, joy, sadness, surprise) to annotate WordNet synsets. LSA is then used to learn, in an unsupervised setting, a vector space from the British National Corpus. In the LSA space, each emotion label can be represented in various ways. In particular, we employ the `LSA Emotion Synset' setting, in which the synsets of direct emotion words are considered. The affective load of a word is computed in terms of its lexical similarity with respect to one of the six emotion labels. The overall affective load of a text is then calculated as the average of its similarity with each emotion label.
Emotions and Sentiment features are grouped under the label Affect (AFF).
Readability (READ). We further analyzed the texts in the three datasets according to readability indices, to understand whether there is a difference in language difficulty among them. Basically, the task of readability assessment consists of quantifying how difficult a text is for a reader. This kind of assessment has been widely used for several purposes, such as evaluating the reading level of children and impaired persons and improving Web content accessibility; see for example what is reported in BIBREF31 . We use three indices to compute the difficulty of a text: the Gunning Fog BIBREF32 , Flesch BIBREF33 and Kincaid BIBREF34 indices. These metrics combine factors such as word and sentence length that are easy to compute and approximate the linguistic elements that have an impact on readability. In the following formulae, $Sent_A$ represents the number of sentences in text $A$ , $Cpx_A$ the number of complex words (those with three or more syllables), and $Syll_A$ the total number of syllables.
The Fog index is a rough measure of how many years of schooling it would take someone to understand the content; higher scores indicate material that is harder to read. Texts requiring near-universal understanding have an index less than 8. Academic papers usually have a score between 15 and 20. The score, for a given text $A$ , is calculated according to the formula:
$$Fog_{A} = 0.4 \Big ( \frac{Words_A}{Sent_A} + 100 \frac{Cpx_A}{Words_A} \Big )$$ (Eq. 16)
The Flesch Index rates texts on a 100-point scale. Higher scores indicate material that is easier to read while lower numbers mark passages that are more difficult to read. Scores can be interpreted as: 90-100 for content easily understood by an average 11-year-old student, while 0-30 for content best understood by university graduates. The score is calculated with the following formula:
$$Flesch_{A} = 206.835 - 1.015 \frac{Words_A}{Sent_A} -84.6 \frac{Syll_A}{Words_A}$$ (Eq. 17)
The Kincaid Index or “Flesch–Kincaid Grade Level Formula" translates the 0-100 score of the Flesch Index to a U.S. grade level. It can be interpreted as the number of years of education required to understand this text, similar to the Gunning Fog index. The grade level is calculated with the following formula:
$$Kincaid_{A} = 0.39 \frac{Words_A}{Sent_A} + 11.8 \frac{Syll_A}{Words_A} - 15.59$$ (Eq. 18)
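The three indices are straightforward to compute once the counts are available; the R sketch below assumes the word, sentence, complex-word and syllable counts have already been obtained (e.g. with a syllable counter of choice) and simply applies the formulae above.

fog     <- function(words, sents, cpx)  0.4 * (words / sents + 100 * cpx / words)
flesch  <- function(words, sents, syll) 206.835 - 1.015 * words / sents - 84.6 * syll / words
kincaid <- function(words, sents, syll) 0.39 * words / sents + 11.8 * syll / words - 15.59

# A 100-word, 5-sentence text with 10 complex words and 140 syllables:
fog(100, 5, 10)       # 12.0
flesch(100, 5, 140)   # about 68.1
kincaid(100, 5, 140)  # about 8.7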
Descriptive Statistics
As can be seen from Tables 2 (Named Entities) and 3 (Temporal Expressions), urban legends sit half-way between fairy tales and news, as we expected. While fairy tales represent out-of-time, out-of-place and always-true stories (“a long time ago in a faraway land"), news represent circumstantial descriptions of events. This is reflected by the overall use of named entities (respectively almost three and four times more in UL and GN) and of temporal expressions (respectively almost two and three times more). Interestingly, person names are the only case where FT reduce the lead of UL and GN, which can be explained by the fact that characters in FT are usually addressed with proper names (e.g. “Hansel and Gretel”).
In Table 4 , statistics for sentiment and emotion coverage are reported. As can be seen, in the SENT group of features the differences are less marked and, quite surprisingly, ULs have the lowest scores, while, as we would expect, FTs have the highest. Sentiment does not meet our initial expectation and seems in contrast with previous works – see for example what is reported in BIBREF35 on UL and evoked emotions; still, the results on sentiment as compared to emotions can be explained by the distinction between affective impact and affective language. In fact, affective impact can either derive from the wording of the text itself (usage of strong affect words), or from the depicted situation (i.e. emotions are evoked by describing a vivid situation with a plain language). In our experiment we tested the `wording' using SENT features and the `evoked emotions' with the EMO features. So, UL seem to use a plain and objective language, similar to GN, to gain credibility, but tend to evoke strong emotions (similar to FT) to be catchy. Consider the “Kidney Theft” excerpt described in Section 1: as stated, there is no very strong emotional wording in this UL; it is the depicted situation that is scary per se.
In Table 5 , statistics for readability are reported. As can be seen, ULs are readable in a way similar to fairy tales. Still, depending on the readability indices, which grasp different aspects of text difficulty, ULs are either slightly easier than FTs or half-way between FTs and GNs, similar to the cases of Tables 2 and 3 .
This behavior can be explained by the fact that ULs have a simpler syntax than FTs but a more complex lexicon. In fact, inspecting the individual elements of the formulae, as reported in the second part of Table 5 , we see that while the percentage of complex words (either $\frac{Cpx_A}{Words_A}$ or $\frac{Syll_A}{Words_A}$ ) puts UL halfway between FT and GN, the average length of sentences ( $\frac{Words_A}{Sent_A}$ ) is surprisingly higher for FT than for GN and in turn UL. So, the results in Table 5 can be interpreted according to the weight given either to complex words or to sentence length.
All differences in the means reported in the tables are statistically significant (Student's t-test, $p<0.001$ ) apart from TIME, between UL and FT, and DURATION, between UL and GN, (signalled with * in Table 3 ).
Turning to the analysis of variance, we see that FT is – on average – a more cohesive genre, with lower standard deviations, while GN and UL have higher and closer standard deviations. In fact, all differences in the standard deviations reported in the tables are statistically significant (F-test, $p<0.001$ ) apart from those between UL and GN in Fog, Kincaid and in ALL sentiment (signalled with * in the respective tables).
Classification Experiments
The goal of our experiments is to understand to what extent it is possible to assign a text to one of the aforementioned classes using just the prototypical characteristics (features) discussed above, and whether there is a subset of features that stands out among the others in this classification task. For every feature combination we conducted a binary classification experiment with ten-fold cross validation on the dataset. We always randomly downsampled the majority class in order to make the dataset balanced, i.e. 50% of positive examples and 50% of negative examples; this accounts for a random baseline of 0.5. We also normalized all features according to z-score. Experiments were carried out using SVM BIBREF36 , in particular libSVM BIBREF37 under its default settings. Results are reported in Table 6 ; all significance tests discussed below are computed using an approximate randomization test BIBREF38 .
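The set-up can be reproduced in R with the e1071 package (an interface to libSVM); the sketch below is illustrative and assumes a data frame feats with one row per text, the feature columns described above and a class column – it is not the script actually used for the experiments.

library(e1071)

run_experiment <- function(feats) {
  # downsample the majority class so that both classes have the same size
  n_min <- min(table(feats$class))
  bal   <- do.call(rbind, lapply(split(feats, feats$class),
                                 function(d) d[sample(nrow(d), n_min), ]))
  bal$class <- factor(bal$class)
  # z-score normalisation of the (numeric) feature columns
  num <- sapply(bal, is.numeric)
  bal[num] <- scale(bal[num])
  # libSVM under its default settings, with ten-fold cross-validation
  fit <- svm(class ~ ., data = bal, cross = 10)
  fit$tot.accuracy
}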
Urban Legends vs News. In the UL vs. GN classification task, while all the features together performed well (F1 = 0.833), improving over all other subgroups of features ( $p<0.001$ ), no single group of features performed so well, apart from READ (F1 = 0.763, $p<0.001$ ). Particularly, the temporal features (TIMEX) performed worse than AFF and NE ( $p<0.001$ ). Still, all features improved over the baseline ( $p<0.001$ ).
Urban Legends vs Fairy Tales. In the UL vs. FT classification task, all the features together performed better than in the previous experiment (F1 = 0.897), again improving over all the other subgroups of features alone ( $p<0.001$ ). Interestingly, the best discriminative subgroup of features (still READ, F1 = 0.868) in this case reduces the lead with respect to all the features together (ALL) and improves over the other subgroups ( $p<0.001$ ) apart from the AFF group – from which it has no significant difference – which in this case performs better than in the previous experiment. On the contrary, the TIMEX group had performance similar to the previous experiment, while NE improved its performance. Finally, all groups of features had a statistically significant improvement over the baseline ( $p<0.001$ ).
In Table 7 we report the performances of the various classification tasks in term of precision, recall and F1 over the single classes. Interestingly, for almost all feature combinations the classifiers had slightly higher precision than recall for UL, while the contrary holds for FT and GN.
News vs Fairy Tales. Finally, we wanted to check whether UL being “half-way" between GN and FT can be observed in our classification experiments as well. If this hypothesis is correct, by classifying GN vs. FT we would expect to find higher performance than in the previous experiments. Results show that this is in fact the case. All features together performed better than in all previous experiments and remarkably well (F1 = 0.978), again improving over all the other subgroups of features alone ( $p<0.001$ ) apart from READ, which performs equally well (F1 = 0.973, no statistically significant difference). Notably, all other groups of features improve over the UL vs. GN and the UL vs. FT tasks. Finally, all groups of features had a statistically significant improvement over the random baseline ( $p<0.001$ ).
Three Class Classification. Finally, we also tested feature predictivity on a three-class classification task (UL vs GN vs FT). Since in this case we did not perform downsampling, we use the ZeroR classifier as a baseline. For the sake of interpretability of results, along with precision, recall and F1 we also provide the Matthews Correlation Coefficient (MCC), which is useful for unbalanced datasets, as presented in BIBREF39 for the multiclass case. MCC returns a value between -1 and +1, where +1 represents a perfect prediction, 0 no better than random and -1 indicates total disagreement. Results are consistent with previous experiments. In Table 8 , all feature configurations show an improvement over the baseline ( $p<0.001$ ) but the temporal features (TIMEX) have far lower discriminative power as compared to the other groups of features (MCC=0.339).
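For reference, the multiclass MCC can be computed directly from a confusion matrix; the sketch below is our own illustration of the coefficient used in BIBREF39 , not the authors' code, and assumes rows index the true classes and columns the predicted ones.

multiclass_mcc <- function(cm) {
  n <- sum(cm); c <- sum(diag(cm))     # total samples and correctly predicted samples
  t <- rowSums(cm); p <- colSums(cm)   # per-class true and predicted counts
  (c * n - sum(p * t)) / (sqrt(n^2 - sum(p^2)) * sqrt(n^2 - sum(t^2)))
}

# a perfect prediction on a toy three-class confusion matrix gives +1
multiclass_mcc(diag(c(10, 20, 30)))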
Discussion
While between UL and GN the discrimination is given by a skillful mixture of all the prototypical features together, where none has a clear predominance over the others, between UL and FT readability (READ) and affect (AFF) play a major role. From the summary in Table 8 we see that while ALL features together have the highest averaged F1, READ is the best performing subset of features in all experiments, followed by AFF, NE and TIMEX, which perform reasonably well. In general, these experiments proved the effectiveness of our features in discriminating UL against FT and GN in a machine learning framework, confirming the results emerged from the quantitative analysis part. In particular, as expected, these features gave the best results in the GN vs FT experiments, showing that these two genres represent the extremes of a continuum where ULs are placed.
Ecological Experiments in the News Domain
As a final validation of our feature importance we also set up experiments where we controlled for the medium where the message is delivered, specifically the online news domain. Since newspapers exist for all kinds of stories and with all sorts of reputations for reliability, we focused on two specific websites. One is the Weekly World News (WWN), a news website with very low reliability where many of the stories have the qualities of urban legends (the WWN was famous for stories about Bigfoot, UFOs, etc.). The other website is The New York Times (NYT), known for its high reliability and fact-checking procedures.
We scraped the WWN for a total of 225 stories, and then randomly selected an equal amount of stories from the NYT. For both datasets we extracted the same set of features discussed in the previous sections. For every feature combination we conducted a binary classification experiment with ten-fold cross validation on the dataset. Since the dataset is balanced, this accounts for a random baseline of 0.5. We also normalized all features according to z-score. Results are reported in Table 9 .
Also in this case our features are able to discriminate between reliable and non-reliable stories (namely those coming from NYT and WWN). In particular, all the features together performed very well (F1 = 0.864), improving over all other subgroups of features (p $<$ 0.001); NE, TIMEX and READ performed equally well, improving over AFF, which was the least effective (p $<$ 0.001). Still, AFF improves over the random baseline (p $<$ 0.001).
With this last experiment we were able to show that stories from newspapers of differing reliability can be classified correctly using the features learned for discriminating regular news from urban legends. So, also in more applied and ecological scenarios, where stories come from the same medium (online news), these features are useful in discriminating stories on the basis of their UL-ness or GN-ness.
Conclusions
In this paper we have presented a study on urban legends, a genre of modern folklore consisting of stories about some rare and exceptional events plausible enough to be believed. We argued that urban legends represent a form of “sticky” deceptive text, marked by a tension between the credible and incredible. To be credible they should resemble a news article while being incredible like a fairy tale. In particular we focused on the idea that ULs should mimic the details of news (who, where, when) to be credible, while being emotional and readable like a fairy tale to be catchy and memorable. Using NLP tools we presented a quantitative analysis of these simple yet effective features and provided some machine learning experiments showing that it is possible to recognize an urban legend using just these prototypical characteristics. In the future we want to explore other prototypical aspects of urban legends like, for example, linguistic style BIBREF40 , BIBREF41 . With regard to sentiment, besides the simple word polarities we used, we will explore the emotions expressed in UL, FT and GN, using an approach similar to the one described in BIBREF29 . Exploiting knowledge-based and corpus-based methods, that approach deals with automatic recognition of affect, annotating texts with six basic emotions. We believe that fine-grained emotion annotation of urban legends could shed more light on the mechanisms behind persuasive language.
6a7370dd12682434248d006ffe0a72228c439693 | 6a7370dd12682434248d006ffe0a72228c439693_0 | Q: How large are the language sets that can be explored using this approach?
Text: Introduction
The need to uncover presumed underlying linguistic evolutionary principles and to analyse correlations between the world's languages has motivated this research. For centuries people have been speculating about the origins of language; however, this subject is still obscure. Non-automated linguistic analysis of language relationships has been complicated and very time-consuming. Consequently, this research aims to apply a computational approach to compare human languages, based on the phonetic representation of certain key words and concepts. This comparison of word similarity aims to facilitate the grouping of languages and the analysis of the formation of genealogical relationships between languages.
This report contains a thorough description of the proposed methods, the developed techniques and a discussion of the results. During this project several collections of words were gathered and examined, including colour words and numbers. The methods included edit distance, a phonetic substitution table, hierarchical clustering with a cut and other analysis methods. They all aimed to provide insight through both technical data summaries and their visual representation.
Background ::: Human languages
For centuries, people have speculated over the origins of language and its early development. It is believed that language first appeared among Homo Sapiens somewhere between 50,000 and 150,000 years ago BIBREF0. However, the origins of human language are very obscure.
To begin with, it is still unknown whether human language originated from one original and universal Proto-Language. Alfredo Trombetti made the first scientific attempt to establish the reality of monogenesis in languages. His investigation concluded that it was spoken between 100,000 and 200,000 years ago, close to the first emergence of Homo Sapiens BIBREF1. However, this conclusion was never widely accepted. The concept of a Proto-Language is purely hypothetical and not amenable to analysis in historical linguistics.
Furthermore, there are multiple theories of how language evolved. These can be separated into two distinct groups.
Firstly, some researchers claim that language evolved as a result of other evolutionary processes, essentially making it a by-product of evolution, of selection for other abilities, or a consequence of yet unknown laws of growth and form. This theory is clearly established in the work of Noam Chomsky BIBREF2 and Stephen Jay Gould BIBREF3. Both scientists hypothesize that language evolved together with the human brain, or with the evolution of cognitive structures. These structures were used for tool making, information processing and learning, and were also beneficial for complex communication. This conforms with the theory that as our brains became larger, our cognitive functions increased.
Secondly, another widely held theory is that language came about as an evolutionary adaptation, which is when a population undergoes changes over time to survive better. Steven Pinker and Paul Bloom, in “Natural Language and Natural Selection” BIBREF4, theorize that a series of calls or gestures evolved over time into combinations, resulting in complex communication.
Today there are 7,111 distinct languages spoken worldwide according to the 2019 Ethnologue language database. Many circumstances such as the spread of old civilizations, geographical features, and history determine the number of languages spoken in a particular region. Nearly two thirds of languages are from Asia and Africa.
The Asian continent has the largest number of spoken languages - 2,303. Africa follows closely with 2,140 languages spoken across the continent. However, given the population of certain areas and colonial expansion in recent centuries, 86 percent of people use languages from Europe and Asia. It is estimated that there are around 4.2 billion speakers of Asian languages and around 1.75 billion speakers of European languages.
Moreover, Pacific languages have approximately 1,000 speakers each on average, but altogether, they represent more than a third of our world’s languages. Papua New Guinea is the most linguistically diverse country in the world. This is possibly due to the effect of its geography imposing isolation on communities. It has over 840 languages spoken, with twelve of them lacking many speakers. It is followed by Indonesia, which has 709 languages spoken across the country.
Background ::: Human languages ::: Indo-European languages and Kurgan Hypothesis
Indo-European is a language family that includes most of the modern languages of Europe, as well as several languages of Asia. The Indo-European language family consists of several hundred related languages and dialects. Consequently, linguists have long been interested in exploring the origins of the Indo-European language family.
In the mid-1950s, Marija Gimbutas, a Lithuanian-American archaeologist and anthropologist, combined her substantial background in linguistic paleontology with archaeological evidence to formulate the Kurgan hypothesis BIBREF5. This hypothesis is the most widely accepted proposal to identify the homeland of the speakers of Proto-Indo-European (PIE), the ancient common ancestor of the Indo-European languages, and to explain the rapid and extensive spread of Indo-European languages throughout Europe and Asia BIBREF6 BIBREF7. The Kurgan hypothesis proposes that the most likely speakers of the Proto-Indo-European language were people of a Kurgan culture in the Pontic steppe, on the north side of the Black Sea. It also divides the Kurgan culture into four successive stages (I, II, III, IV) and identifies three waves of expansion (I, II, III). In addition, the model suggests that the Indo-European migration took place from 4000 to 1000 BC. See figure FIGREF4 for a visual illustration of the Indo-European migration.
Today there are approximately 445 living Indo-European languages, which are spoken by 3.2 billion people, according to Ethnologue. They are divided into the following groups: Albanian, Armenian, Baltic, Slavic, Celtic, Germanic, Hellenic, Indo-Iranian and Italic (Romance) FIGREF3 BIBREF8.
Background ::: Human languages ::: Brittonic languages
Brittonic or British Celtic languages derive from the Common Brittonic language, spoken throughout Great Britain south of the Firth of Forth during the Iron Age and Roman period. They are classified as Indo-European Celtic languages BIBREF10. The family tree of Brittonic languages is shown in Table TABREF6. Common Brittonic is ancestral to Western and Southwestern Brittonic. Consequently, Cumbric and Welsh, the latter spoken in Wales, derived from Western Brittonic, while Cornish and Breton, spoken in Cornwall and Brittany respectively, originated from the Southwestern branch.
Today Welsh, Cornish and Breton are still in use. However, it is worth pointing out that Cornish is a language revived by second-language learners, the last native speakers having died in the late 18th century. Some people claimed that the Cornish language is an important part of their identity, culture and heritage, and a revival began in the early 20th century. Cornish is currently a recognised minority language under the European Charter for Regional or Minority Languages.
Background ::: Human languages ::: Sheep Counting System
The Brittonic Celtic language is the ancestor of the number names used for sheep counting BIBREF11 BIBREF12. Until the Industrial Revolution, the use of traditional number systems was common among shepherds, especially in the fells of the Lake District. The sheep-counting system was referred to as Yan Tan Tethera. It was spread across Northern England and in other parts of Britain in earlier times. The number names varied according to dialect, geography, and other factors. They also preserve interesting indications of how languages evolved over time.
The word “yan” or “yen” meaning “one”, in some northern English dialects, represents a regular development in Northern English BIBREF13. During this development the Old English long vowel <ā> was broken into /ie/, /ia/ and so on. This explains the shift to “yan” and “ane” from the Old English ān, which is itself derived from the Proto-Germanic “*ainaz” BIBREF14.
In addition, the counting system demonstrates a clear connection with counting on the fingers, particularly after numbers reach 10, as the best known examples are formed according to this structure: 1 and 10, 2 and 10, up to 15, and then 1 and 15, 2 and 15, up to 20. The count would end at 20. This might be due to the fact that the shepherds, on reaching 20, would transfer a pebble or marble from one pocket to another, so as to keep a tally of the number of scores.
Aims and Objectives ::: Overall Aim
The aim of this research was to develop computational methods to compare human languages based on the phonetic form of single words (i.e. not exploiting grammar). This comparison of word similarity aims to facilitate the grouping of languages, the identification of the presumed underlying linguistic evolutionary principles and the analysis of the formation of genealogical relationships between languages.
Aims and Objectives ::: Specific Objectives
Devise a way to encode the phonetic representation of words, using:
an in-house encoding,
an IPA (International Phonetic Alphabet).
Develop methods to analyze the comparative relationships between languages using: descriptive and inferential statistics, clustering, visualisation of the data, and analysis of the results.
Implement a repeatable process for running the analysis methods with new data.
Analyse the correlation between geographical distance and language similarity (linguistic distance), and investigate if it explains the evolutionary distance.
Examine which words exhibit more or less variation and the likely causes of it.
Explore which words are preserved better across the same language group and possible reasons behind it.
Explore which language group preserves particular words more in comparison to others and potential reasons behind it.
Determine if certain language groups are correct and explore the possibility of forming new ones.
Data ::: Language files
A language file, or database, is a set of languages, each of which is associated with an ordered list of words. All lists of words for a particular data set have the same length. For example:
numbers(romani,[iek,dui,trin,shtar,panj,shov,efta,oksto,ena,desh]).
numbers(english,[wun,too,three,foor,five,siks,seven,eit,nine,ten]).
numbers(french,[un,de,troi,katre,sink,sis,set,wuit,neuf,dis]).
Words and languages are encoded in this format for later use in Prolog. In Prolog each “numbers” line is a fact with 2 arguments; the first is the language name and the second is a list (indicated by square brackets) of words. Words can be written down in their original form or encoded phonetically (as shown in the example). Where synonyms for a word are known, the word itself is represented by a list of the synonymous words. In the example below, Lithuanian, Russian and Italian have two words for the English `blue':
words(english,[black,white,red,yellow,blue,green]).
words(lithuanian,[juoda,balta,raudona,geltona,[melyna,zhydra],zhalia]).
words(russian,[chornyj,belyj,krasnyj,zholtyj,[sinij,goluboj],zeljonyj]).
words(italian,[nero,bianco,rosso,giallo,[blu,azzurro],verde]).
The main focus of this research was exploring words phonetically. Consequently, a special encoding was used: each phoneme is encoded with a single letter, with capital letters used for additional sounds (see Table TABREF21).
Table TABREF22 summarises the language files that are obtained at the moment.
Data ::: Sheep ::: Sheep counting words
Sheep counting numbers were extracted from “Yan Tan Tethera” BIBREF12 page on Wikipedia and placed in a Prolog database. Furthermore, data was encoded phonetically using the set of rules provided by Prof. David Gilbert.
In the given source, number sets ranged from 1-3 to 1-20 for different dialects. The initial step was to reduce the data to sets of numbers 1-10, aiming:
to have Prolog syntax without errors (avoiding “-” and “ ”, which were common symbols once numbers exceeded 10);
to avoid the effects of different methods of forming and writing down numbers higher than 10. (Usually they were formed from numbers 1-10 and a base. However, they were written in a different order, making the comparison inefficient.)
In addition, the Wharfedale dialect was removed since only numbers 1-3 were provided; the Weardale dialect was eliminated as it had a counting system with base 5. Consequently, the final version of sheep counting numbers database consisted of 23 observations (dialects) with numbers 1-10.
Data ::: Sheep ::: Geographical data
In order to enable the analysis of the relationship between linguistic and geographical distance, a geographical distance database was created. This was done by first creating a personalized Google Map with 23 pins, marking the places of the different dialects (located approximately in the middle of each area) (Figure: FIGREF28). Subsequently, pairwise distances (walking distances) were calculated between all of them and added to the database for further use.
Data ::: Colours
Colour words were extracted from “Colour words in many languages” BIBREF15 page on Omniglot, collected from people and dictionaries. In addition, data was encoded phonetically using the set of rules provided by Prof. David Gilbert.
The latest version of the database consisted of 42 different languages, each containing 6 colours: black, white, red, yellow, blue, green. For the purposes of analysis the following groups were created:
All languages - “ColoursAll” (42 languages)
Indo-European languages - “ColoursIE” (39 languages)
Germanic languages - “ColoursPGermanic” (10 languages)
Romance languages - “ColoursPRomance” (11 languages)
Germanic and Romance languages - “ColoursPG_R” (21 languages)
Data ::: IPA
“Automatic Phonemic Transcriber” BIBREF16 was used to create 3 IPA encoded databases:
“BasicWords” - words in their original form were taken from Prof. David Gilbert's database for basic words (including: sun, moon, rain, water, fire, man, woman, mother, father, child, yes, no, blood).
“Numbers” - numbers from 1-10 in their original form were taken from Prof. David Gilbert's small database of numbers.
“Colours” - words were taken from the above mentioned database (including words: black, white, red, yellow, blue, green).
Each of the above mentioned databases consisted of 3 languages: English, Danish and German (these were the languages the Automatic Phonemic Transcriber provided) all encoded in IPA.
As the research progressed, the difficulty of obtaining IPA encodings for different languages became apparent. This study could not find a cross-linguistic IPA dictionary that included more than 3 languages, raising the question of whether one exists.
Methodology
There are two main processes to be carried out.
The first process (Figure: FIGREF43) aims to analyse a database of words; explore which words exhibit more or less variation and which words are more preserved; and examine how languages could be grouped based on the linguistic distances of words.
It begins with the calculation of pairwise linguistic distances for the given database of words. A Phonetic Substitution Table is used to assign weights during the calculation and could possibly be modified. The result is a new distance table which is analysed in the following ways:
Performing “densityP” function. The outcome is density plots for every word of a database.
Performing hierarchical clustering. Afterwards, the “best cut” is determined, which is either the cut with the best Silhouette value after evaluating all possible cases, or a forced number K equal to the number of words per language in the language file.
Calculating Bhattacharya coefficients.
Performing “mean_SD” function.
The second process (Figure: FIGREF44) aims to investigate the relationship between two sets of distance data. In this research, it was applied to analyse the relationship between linguistic and geographical distances.
It starts with producing two pairwise distance tables: one containing calculated geographical distances, the other calculated linguistic distances. The data from both tables is then combined into a data frame for regression analysis in R. The outcome is an object of class “lm” (the result of applying the R function “lm”), which is used for data analysis, and a scatter plot with a regression line for visual analysis.
Both processes have been automated, see Section SECREF66.
Methods ::: Edit Distance
For the purposes of this research, edit distance (a measure in computer science and computational linguistics for determining the similarity between two strings) was calculated based on the Levenshtein distance metric. This metric between two strings is the minimum number of single-character edits (insertions, deletions or substitutions) required to change one string into the other.
The Levenshtein distance between two strings a,b (of length $\mid a\mid $ and $\mid b\mid $ respectively) is given by $lev_{a,b}(\mid a \mid , \mid b \mid )$ where

$$lev_{a,b}(i,j) = \begin{cases} \max (i,j) & \text{if } \min (i,j)=0,\\ \min \big ( lev_{a,b}(i-1,j)+1,\; lev_{a,b}(i,j-1)+1,\; lev_{a,b}(i-1,j-1)+1_{(a_{i}\ne b_{j})} \big ) & \text{otherwise,} \end{cases}$$

where $1_{(a_{i}\ne b_{j})}$ is the indicator function equal to 0 when $a_{i}=b_{j}$ and equal to 1 otherwise. A normalised edit distance between two strings can be computed by scaling the raw distance by the length of the words compared, so that scores lie between 0 and 1.
Edit distance was implemented by Prof. David Gilbert using dynamic programming in SWI Prolog BIBREF17. The program was used to compare two words with the same meaning from different languages. When pairwise comparing two words where either one or both comprise synonyms, all the alternatives for the word in one language are compared with the corresponding (set of) words in the other language, and the closest match is selected. In addition, all-to-all comparisons were made, i.e. the edit distance was also calculated for words having different meanings. Finally, the edit distance for two languages, each represented by a list of corresponding words of equal length, was computed by taking the average of the edit distance for each (corresponding) pair of words.
An example of pairwise alignments is for the pair of words overa-hofa, where 3 alignments are produced with the use of gap penalty $=1$ and substitution penalties $f \leftrightarrow v = 0.2$, $e \leftrightarrow o = 0.2$ and all other mismatches 1:
[[-,h],[o,o],[v,f],[e,-],[r,-],[a,a]]
[[o,-],[v,h],[e,o],[r,f],[a,a]]
[[o,h],[v,-],[e,o],[r,f],[a,a]]
each with a raw edit distance of 3.2 and the corresponding normalised edit distance.
For the sake of clarity we can write the first alignment, for example, as

-overa
hof--a

where only 3 letters are directly aligned.
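For illustration, the weighted edit distance can be sketched in a few lines of R; the actual implementation is in SWI Prolog, so the code below is only a minimal dynamic-programming sketch, with a two-entry substitution table matching the example above (gap penalty 1, unlisted mismatches 1).

sub_cost <- function(a, b, subs, mismatch = 1) {
  if (a == b) return(0)
  key <- paste(sort(c(a, b)), collapse = "")       # symmetric lookup, e.g. "fv"
  if (!is.null(subs[[key]])) subs[[key]] else mismatch
}

edit_distance <- function(w1, w2, subs, gap = 1) {
  a <- strsplit(w1, "")[[1]]; b <- strsplit(w2, "")[[1]]
  d <- matrix(0, length(a) + 1, length(b) + 1)
  d[, 1] <- (0:length(a)) * gap; d[1, ] <- (0:length(b)) * gap
  for (i in seq_along(a)) for (j in seq_along(b))
    d[i + 1, j + 1] <- min(d[i, j + 1] + gap,                      # deletion
                           d[i + 1, j] + gap,                      # insertion
                           d[i, j] + sub_cost(a[i], b[j], subs))   # (mis)match
  d[length(a) + 1, length(b) + 1]
}

subs <- list(fv = 0.2, eo = 0.2)       # f<->v and e<->o cost 0.2, as in the example
edit_distance("overa", "hofa", subs)   # raw distance 3.2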
Methods ::: Phonetic Substitution Table
In order to give a specified weight to the different operations (insertion, deletion and substitution), a Phonetic Substitution Table was created by incorporating Grimm's law BIBREF18 and extending it in-house.
Grimm's Law, a principle of relationships in Indo-European languages, describes a process of regular shifting of consonants in groups. It consists of 3 phases in terms of a chain shift BIBREF19.
Proto-Indo-European voiceless stops change into voiceless fricatives.
Proto-Indo-European voiced stops become voiceless stops.
Proto-Indo-European voiced aspirated stops become voiced stops or fricatives.
This is an abstract representation of the chain shift:
$bh > b > p > \phi $
$dh > d > t > \theta $
$gh > g > k > x$
$gwh > gw > kw > xw$
Figure FIGREF54 illustrates how further consonant shifting following Grimm's law affected words from different languages BIBREF20.
The phonetic substitution table was extended in-house by adding more shifts. In addition, it was written to work with the special encoding described in Section SECREF20. The full table, “editable”, can be found in Appendix SECREF11.
Another phonetic substitution table, called “editableGaby”, was made (see Appendix SECREF11). It was extended by adding pairs like “dzh” and “zh”; “dzh” and “ch”; “kh” and “g”; as well as “H” (the sound of e.g. the Spanish/Portuguese “j”) with “kh”, “g”, “k”, “h”. In addition, some of the weights were changed for certain pairs for experimental purposes.
Methods ::: Hierarchical Clustering ::: Using the OC program
The OC program BIBREF21 is a general-purpose hierarchical cluster analysis program. It outputs a list of the clusters and optionally draws a dendrogram in PostScript. It requires a complete upper-diagonal distance or similarity matrix as input.
Methods ::: Hierarchical Clustering ::: Using R
Hierarchical clustering in R was performed by combining clustering with Silhouette value calculation and cutting of the resulting tree.
In order to perform agglomerative hierarchical clustering more efficiently, we created a set of functions in R:
“sMatrix” - Makes a symmetric matrix from a specified column. The function takes a specifically formatted data frame as an input and returns a new data frame. Having a symmetric matrix is necessary for “silhouetteV” and “hcutVisual” functions.
“silhouetteV” - Calculates Silhouette values with “k” value varying from 2 to n-1 (n being the number of different languages/number of rows/number of columns in a data frame). The function takes a symmetric distance matrix as an input and returns a new data frame containing all Silhouette values.
“hcutVisual” - Performs hierarchical clustering and makes a cut with the given K value. Makes Silhouette plot, Cluster plot and dendrogram. Returns a “hcut” object from which cluster assignment, silhouette information, etc. can be extracted.
It is important to note that K-Means clustering was not performed as the algorithm is meant to operate over a data matrix, not a distance matrix.
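A minimal sketch of this clustering-and-cutting step is shown below; it assumes a symmetric pairwise distance matrix dmat (as produced by “sMatrix”) and uses average linkage for illustration, since the exact linkage is not essential here.

library(cluster)

d  <- as.dist(dmat)
hc <- hclust(d, method = "average")              # agglomerative hierarchical clustering

# average silhouette width for every possible number of clusters k = 2 .. n-1
sil <- sapply(2:(nrow(dmat) - 1), function(k)
  mean(silhouette(cutree(hc, k), d)[, "sil_width"]))

best_k   <- which.max(sil) + 1                   # cut with the best Silhouette value
clusters <- cutree(hc, best_k)
plot(hc)                                         # dendrogram of the languages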
Methods ::: Further analysis with R
Another set of functions was created to analyse the collected data further. They aim to ease the comparison of the mean, standard deviation and Bhattacharya coefficient within words or language groups. They include:
“mean_SD” - Calculates the mean, the standard deviation and the product of the mean and the SD for every column of the input. Visualises all three values for each column and places them in one plot, which is returned.
“densityP” - Makes a density plot for every column of the input and puts it in one plot, which is returned.
“tscore” - Calculates t-score for every value in the given data frame. (T-score is a standard score Z shifted and scaled to have a mean of 50 and a standard deviation of 10)
“bhatt” - Calculates Bhattacharya coefficient (the probability of the two distributions being the same) for every pair of columns in the data frame. The function returns a new data frame.
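The two transformations can be sketched as follows; the binning choice in the Bhattacharya coefficient is illustrative, and the functions are not the exact implementations used in the project.

tscore <- function(x) as.numeric(10 * scale(x) + 50)   # z-score shifted to mean 50, sd 10

bhatt <- function(x, y, bins = 20) {
  breaks <- seq(min(x, y), max(x, y), length.out = bins + 1)
  p <- hist(x, breaks = breaks, plot = FALSE)$counts / length(x)
  q <- hist(y, breaks = breaks, plot = FALSE)$counts / length(y)
  sum(sqrt(p * q))          # 1 when the two distributions coincide, 0 when disjoint
}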
Methods ::: Process automation
In order to perform the analysis in the most time-efficient manner, the processes of comparing languages were automated. This was done by creating two shell scripts and an R script for each of them.
The first shell script named “oc2r_hist.sh” was made to perform hierarchical clustering with the best silhouette value cut. This script takes a language database as an input and performs pairwise distance calculation. It then calls “hClustering.R” R script, which reads in the produced OC file, performs hierarchical clustering and calculates all possible silhouette values. Finally, it makes a cut with the number of clusters, which provides the highest silhouette value. To enable this process the R script was written by incorporating the functions described in section SECREF57. The outcome of this program is a table of clusters, a dendrogram, clusters' and silhouette plots.
The second shell script called “wordset_make_analyse.sh” was made to perform calculations of mean, standard deviation, Bhattacharya scores and produce density plots. This script takes a language database as an input and performs pairwise distance calculations for each word of the database. It then calls “rAnalysis.R” R script, which reads in the produced OC file and performs further calculations. Firstly, it calculates mean, standard deviation and the product of both of each word and outputs a histogram and a table of scores. Secondly, it produces density plots of each word. Finally, it converts scores into T-Scores and calculates Bhattacharya coefficient for every possible pair of words. It then outputs a table of scores. To enable this process the R script was written by incorporating the functions described in section SECREF61.
Finally, both of the scripts were combined to minimise user participation.
Results ::: Sheep
The sheep counting database was evaluated in the following ways:
Obtaining average pairwise linguistic distance, pairwise linguistic distance of subsets (different words),
Performing all to all comparison (where linguistic distance is calculated between words with different meaning, as well as with the same),
Collecting geographical data and comparing relationship between linguistic and geographical distances.
Upon generation of the above mentioned data, the methods defined in SECREF6 section were used.
Results ::: Sheep ::: Analysis of average and subset linguistic distance
After applying the functions “mean_SD” (Figure: FIGREF72) and “densityP” (Figure: FIGREF73) to the linguistic distances of every word (numbers 1 to 10) in R, the following observations were made. First of all, the most preserved number across all dialects was “10”, with a distance mean of 0.109 and a standard deviation of 0.129. Numbers “1”, “2”, “3”, “4” had comparatively small distances, which might be the result of being used more frequently. On the other hand, number “6” showed more dissimilarities between dialects than other numbers: the mean score was 0.567 and the standard deviation 0.234. The product of the mean and standard deviation helped to evaluate both at the same time. Moreover, the density plots showed significant fluctuation and tended to have a few peaks, but in general conformed with the statistics provided by “mean_SD”.
Results ::: Hierarchical clustering
Hierarchical clustering was performed with the best Silhouette value cut (Figure FIGREF76). The Silhouette value suggested making 9 clusters. In this grouping, the most interesting observation was that Welsh, Breton and Cornish languages were placed together. It conforms with the fact that all 3 languages descended directly from the Common Brittonic language spoken throughout Britain before the English language became dominant.
Results ::: Hierarchical clustering ::: All to all comparison analysis
To enable analysis of the clusters from the all-to-all comparison, hierarchical clustering was performed. This was done with two different approaches: calculating the silhouette value and choosing the number of clusters accordingly, and forcing the function to make 10 clusters, since the sheep counting database contains numbers from 1 to 10.
By using the function “silhouetteV”, silhouette values were calculated for all possible $k$ values. The returned data frame indicated the best number of clusters to be 70 (see Appendix SECREF120 for the dendrogram and cluster plot). The suggested clusters did not separate the numbers 1-10 perfectly, but they were comparatively good. We noticed a pattern whereby numbers with lower mean and standard deviation scores resulted in purer clusters. Clusters of numbers “1”, “2”, “3”, “4”, “5” and “10” were not as mixed as those of “6”, “7”, “8”, “9”.
Another way of looking at the all-to-all comparison data was by producing 10 clusters. This was done by using the “hcutVisual” and “cPurity” functions (see Appendix SECREF120 for the cluster plot). The results showed highly impure clusters (Figure: FIGREF78). Two out of ten clusters were pure, both containing number “5”. Another relatively pure cluster was composed of number “10” and two entries of number “2”. The rest consisted of up to 7 different numbers. This shows that sheep counting numbers in different dialects are too different to form 10 clusters each containing a single number. However, if the dialects were grouped and clustering was performed on the smaller groups, they would likely have reasonably pure clusters. Exploring these grouping options could be a subject for further work.
Results ::: Hierarchical clustering ::: Linguistic and Geographical distance relationship
In order to investigate the correlation between linguistic and geographical distance, the “lm” function was applied and a scatter plot was created. The regression line in the scatter plot suggested that a relationship existed. However, the R-squared value, extracted from the “lm” object, was equal to 0.131. This indicated that a relationship existed, but was not significant.
One assumption made was that the Cornish, Breton and Welsh dialects might have had a weakening effect on the relationship, since they had large linguistic distances compared to other dialects. However, this assumption could not be validated, as the correlation was even less significant after eliminating them. This highlights that although these dialects had large linguistic distance scores, they also had large geographical distances, which do not contradict the relationship.
In addition, a comparison was made between linguistic distance and $Log_{10}(\text{GeographicalDistance})$. This resulted in an even weaker relationship, with R-squared being 0.097.
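A sketch of this regression step in R is given below; it assumes a data frame dists with one row per dialect pair and columns linguistic and geographic (illustrative names, not the actual script).

fit <- lm(linguistic ~ geographic, data = dists)
summary(fit)$r.squared                 # ~0.13 in our experiment

plot(dists$geographic, dists$linguistic,
     xlab = "geographical distance", ylab = "linguistic distance")
abline(fit)                            # regression line on the scatter plot

fit_log <- lm(linguistic ~ log10(geographic), data = dists)
summary(fit_log)$r.squared             # ~0.10: an even weaker relationship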
Results ::: Colours
The Colours database was evaluated in three different ways: computing the average pairwise linguistic distance, computing the subset pairwise linguistic distance for every word, and performing the all-to-all comparison for all groups (all languages, Indo-European, Germanic, Romance, and Germanic and Romance languages). After the above-mentioned data was generated, the previously defined methods were applied.
Results ::: Colours ::: Mean and Standard Deviation
When examining the data calculated for “ColoursAll” none of the colours showed a clear tendency to be more preserved than others (Figure: FIGREF83). All colours had large distances and comparatively small standard deviation when compared with other groups. Small standard deviation was most likely the result of most of the distances being large.
The Indo-European language group scores were similar to “ColoursAll”, exhibiting a slightly larger standard deviation (Figure: FIGREF84). The conclusion could be drawn that words for the colour “Red” are more similar in this group: the mean linguistic distance was 0.61 and the SD was 0.178, whereas the average mean was 0.642 and the average SD 0.212. However, no colour stood out distinctly.
The Germanic and Romance language groups revealed more significant results. Germanic languages preserved the colour “Green” considerably well (Figure: FIGREF85): the mean and SD were 0.168 and 0.129, whereas on average the mean reached 0.333 and the SD 0.171. In addition, the colour “Blue” had favourable scores as well - the mean was 0.209 and the SD was 0.106. Furthermore, Romance languages demonstrated slightly higher means and standard deviations, on average reaching 0.45 and 0.256 (Figure: FIGREF86). Similarly to Germanic, the most preserved colour word in Romance languages was “Green”, with a mean of 0.296 and an SD of 0.214. It was followed by the words for “Black” and then for “Blue”, both being quite similar.
Results ::: Colours ::: Density Plots
Density plots of all languages and of the Indo-European languages were similar: both had multiple peaks, with most of the density around scores of 0.75 (large linguistic distance). Moreover, the Germanic languages' density distributions had two peaks for the words “White”, “Blue” and “Green” (Figure: FIGREF88). This could possibly be the result of certain weightings in the Phonetic Substitution Table, or indicate possible further grouping of languages. The colour “Black” had a more normal distribution and a smoother bell shape compared to the others. Furthermore, Romance languages also produced density plots with two peaks for the words “White”, “Yellow”, “Blue” (Figure: FIGREF89). In contrast, the “Black”, “Red” and “Green” distributions were quite smooth.
In order to test how the Phonetic Substitution Table affects the linguistic distances, the “densityP” function was applied to the linguistic distances calculated with the “GabyTable” substitution table. The aim was to eliminate the two peaks in the Germanic language group for the word “Green”. In Germanic languages the word for green tended to begin with either “gr” or “khr” (encoded as “Kr”), both sounding similar phonetically. However, in the original substitution table, a weight for changing “K” (kh) to “g” (and the other way around) did not exist. Consequently, a new table was implemented with this substitution. This change resulted in notably smaller linguistic distances - the mean for the word “Green” was 0.099. However, it did not remove the two peaks: the density of “Green” again had two main peaks, but differently distributed compared to the previous case.
Results ::: Colours ::: Bhattacharya Coefficients
Bhattacharya coefficients were calculated within each group for different pairs of colours. This helped to evaluate which colours were closer in distribution. In addition, hierarchical clustering was done with Bhattacharya coefficients (find the dendrograms in the Appendix SECREF123). However, the potential meaning behind the results was not fully examined.
Another potential use of Bhattacharya coefficients is their application to the same word from different language groups. As a result, the preservation of particular words can be analysed across language groups, enabling comparison and evaluation of the potential reasons behind it.
Results ::: Colours ::: Hierarchical Clustering
Hierarchical clustering with the best Silhouette value cut was performed in R for every group of formed language groups: all languages, Indo-European, Romance, Germanic, and both Germanic and Romance together. It is important to note that the results of the language group “Romance and Germanic” will not be discussed as it was used more for testing purposes and as expected resulted in a K=2 cut. After making the cut, one cluster consisted of Romance languages and another consisted of Germanic languages.
To begin with, clustering of all languages showed some interesting results that complied with the grouping of the languages (find the dendrogram in Figure: FIGREF92).
The cut suggested by the Silhouette value was 23 clusters. Some of the clusters were more a coincidence than a reflection of actual similarity of languages and did not correspond with the existing language grouping. Despite that, most of the clusters corresponded to actual language groups, or to closely related languages. To begin with, Baltic Romani, Punjabi and Urdu were placed in the same cluster; even though Baltic Romani is far away from South Asia geographically, it is believed to have originated from this area. Xhosa and Zulu formed another cluster, both being languages of the Nguni branch spoken in South Africa. Hawaiian, Malagasy and Maori were grouped together, and they all belong to the Austronesian ethnolinguistic group BIBREF22 (see figure FIGREF93).
Sinhala (the language of the Sinhalese people, who make up the largest ethnic group in Sri Lanka), Dhivehi (spoken in the Maldives) and Maldivian fell in the same group after the cut; they are all spread across islands in the Indian Ocean. Estonian and Finnish, both representatives of the Uralic language family, were in the same cluster.
Moreover, clusters of Indo-European languages were quite pure as well (the groups are visible in the dendrogram of all languages; for clarity see figure FIGREF94). There were four larger groups that stood out. First of all, the group of Germanic languages was produced accurately. It consisted of Faroese, Icelandic, German, Luxembourgish, Yiddish, English, Norwegian, Swedish, Afrikaans and Dutch, all of which are considered to be in the Germanic branch. Another cluster was the Slavic languages, which consisted of Croatian, Polish, Russian, Slovenian, Czech, Slovak and Lithuanian. Lithuanian and Latvian, according to some sources, are considered to be in a separate branch, known as the Baltic languages, while other sources regard them as Slavic languages. In this case, in terms of colour words Lithuanian was appointed to the Slavic languages, whereas Latvian formed a cluster on its own. The Romance languages were divided into two groups. The first was made up of Ladino (a language that derived from medieval Spanish), Spanish (Castilian), Galician and Portuguese, forming a group of Western Romance languages. The second consisted of Sicilian, Italian, Neapolitan, Catalan and Romanian and could be called a group of Mediterranean Romance languages.
Furthermore, the clustering results for the Germanic languages file (Figure: FIGREF95) show a strong relation to the geographical prevalence of the languages and their development history. German, Luxembourgish (which has similarities with other varieties of High German) and Yiddish (a High German-based language) were all in the same cluster. Also, Afrikaans and Dutch were placed in the same group, and it is known that Afrikaans derived from the Dutch vernacular of South Holland in the course of the 18th century. Other clusters included Faroese and Icelandic, Swedish and Norwegian, as well as English forming a cluster on its own.
Finally, when looking at the clusters of the Romance languages file (Figure FIGREF96), it is evident that one cluster, consisting of Ladino, Spanish, Galician and Portuguese, remained the same as in “ColoursAll” and “ColoursIE”. Another cluster that was formed from Romance languages in those databases was broken down into 3 clusters during the separate clustering of Romance languages: Romanian and Catalan formed clusters on their own, while Italian, Neapolitan and Sicilian were members of another cluster. These three languages are close geographically.
Results ::: IPA
Hierarchical clustering was performed on all three IPA databases and compared with the results of hierarchical clustering of the in-house phonetically encoded databases (created by taking subsets of the German, English and Danish languages from the “Basic Words”, “Numbers Small Collection” and “Colours” databases). The first characteristic noticed was that both IPA and non-IPA databases produced the same grouping of languages. This provides evidence that the in-house phonetic encoding is well founded. Another noted tendency was that pairwise linguistic distance scores tended to be higher for the IPA databases. This might be due to some graphemes being written with several letters in the IPA databases, while the in-house phonetic encoding expressed graphemes as one symbol.
Potential further work would be generating an IPA-designated Phonetic Substitution table (so far clustering has been done with “editable”) and running the routines with the new weight table. Also, complementing the IPA databases with more languages would be an important step towards receiving more accurate results.
Results ::: Small Numbers ::: All to all comparison
Analysis was carried out in two ways. First of all, hierarchical clustering was performed with the best silhouette value cut. For this data set the best silhouette value was 0.48, suggesting 329 clusters. The clusters did not exhibit high purity; however, the ones that did quite clearly corresponded to unique subgroups of language families.
Another way of looking at the all-to-all comparison data was by producing 10 clusters. The anticipated outcome was members being distinguished by number, forming 10 clean clusters. However, all the clusters were very impure, consisting of multiple different numbers. This might be due to different languages having phonetically similar words for different numbers in this case.
All to all pairwise comparison could be an advantageous tool when used for language family branches or smaller, but related subsets. It could validate if languages belong to a certain group.
Conclusions
This project has aimed to develop computational methods to analyse and understand connections between human languages.
The project included collecting words from different languages in order to form new databases, forming rules for the phonetic encoding of words and adjusting the phonetic substitution table. Several computational methods of calculating the pairwise distance between two words were applied, including average, subset and all-to-all distance calculation. This was done by incorporating edit distance and the phonetic substitution table, implemented in SWI Prolog. This was followed by a detailed analysis of the distance scores, conducted with the specific automated routines and the developed R functions. These enabled hierarchical clustering with a cut made either according to the silhouette value or to a specified K value, and provided summaries of the mean, standard deviation and other statistics, such as Bhattacharya scores. All these techniques delivered a thorough analysis of the data, and the automation of the processes ensured they were used efficiently.
The outcome of the analysis of old sheep counting systems in different English dialects was the observation that numbers “1”, “2”, “3”, “4” and “10” were more uniform across dialects than the others, suggesting that they might have been the most frequently used ones. The analysis of the all-to-all comparison did not provide pure clusters and shows that sheep counting numbers in different dialects are too different to form 10 clusters each containing a single number; this suggests that the dialects should be grouped into subsets. Furthermore, hierarchical clustering with the best silhouette cut suggested 9 potential groups, whose members have the most similar counting words. Surprisingly, this grouping was not entirely based on location. This corresponded with the difficulty of finding a relationship between geographic and linguistic distance; the conducted tests showed it was insignificant.
The analysis of colour words revealed that within the Indo-European languages the words for the colour red were moderately better preserved. Both the Germanic and Romance language groups tended to have considerably more uniform words for the green and blue colours. In addition, the Romance language group preserved the colour black reasonably well. The analysis of the distribution of linguistic distances showed multiple peaks within the words for various language groups, suggesting that further language grouping could be done. Furthermore, hierarchical clustering with the silhouette cut largely recovered the known, officially accepted language families: most of the clusters were subgroups of existing language families. Some of them suggested a different sub-grouping according to colour words (e.g. Lithuanian was assigned to the Slavic languages, while Latvian formed a cluster of its own).
IPA databases resulted in the same relationships between languages as non-IPA phonetically encoded databases. However, to fully explore the potential of IPA-encoded databases they ought to be expanded and a customized weights table should be created.
In conclusion, this project resulted in the creation of several effective computational techniques to explore many languages and their relationships all at once.
Further Work
One area where further work could be performed is a thorough analysis of both the Small and Big Collection numbers databases, as well as the Basic Words database.
In addition, the analysis routines could be enhanced by adding Bhattacharya scores calculated in a different manner. In other words, a potentially beneficial use of Bhattacharya coefficients would be applying them to the same word across different language groups. As a result, the preservation of particular words could be analysed across language groups, enabling comparison and evaluation of the potential reasons behind it.
Moreover, regarding the IPA-encoded data, potential further work would be generating a customized IPA Phonetic Substitution table. An important step towards obtaining more accurate and interesting results would also be augmenting the IPA databases with more languages.
Finally, classifying the languages in the language databases and automatically analysing the purity of clusters would be a step towards a fully automated and consistent process. To this end, a list of 118 languages with their language families and branches has been created; it could be incorporated into the existing language databases.
Summary of contributions
My personal contributions during this undergraduate research assistantship include:
Summary of contributions ::: Data Collection.
Created a Sheep counting numbers database.
Created a geographical data database and a map of dialects.
Collected colour words from 42 different languages and made a database. Made the following subsets: Indo-European, Germanic, Romance, Romance and Germanic.
Created numbers, colours and basic words databases in IPA encoding.
Made a list of 118 languages, their language families and branches.
Summary of contributions ::: Transforming data using phonetics.
Transformed sheep counting numbers, colours (including Indo-European, Germanic, Romance, Romance and Germanic subsets) databases using a specified phonetic encoding.
Summary of contributions ::: Mean, SD and density analysis.
Analysed mean, SD and density of sheep numbers, colours (including all subsets). Produced tables and plots.
Summary of contributions ::: T-Scores and Bhattacharya calculations.
Calculated T-Scores and Bhattacharya coefficients for sheep numbers, colours (including all subsets); Made dendrograms from Bhattacharya scores.
Summary of contributions ::: Hierarchical clustering.
Performed hierarchical clustering for sheep numbers, colours (all subsets), IPA (all three). Created dendrograms.
Performed hierarchical clustering with the best silhouette cut value for sheep numbers all to all, colours (all subsets), small numbers all to all. Made dendrograms, Silhouette plots, Cluster plots.
Performed hierarchical clustering with k=10 cut for numbers all to all, colours (all subsets), small numbers all to all. Made dendrograms, Silhouette plots, Cluster plots.
Summary of contributions ::: Code development.
Created a package in R “CompLinguistics”, which consisted of functions: “mean_SD”, “densityP”, “sMatrix”, “tscore”, “bhatt”, “silhouetteV”, “hcutVisual”.
Produced R script that automates the processes of file reading, generating a certain format data frame, performing hierarchical clustering with the best silhouette value cut. In addition, created another R script, which performed calculations of mean, standard deviation, Bhattacharya scores and analysis of distribution.
Wrote several shell scripts.
Created the “editableGaby” phonetic substitution table.
Acknowledgements
Gabija Mikulyte was supported by an undergraduate research grant from the Department of Computer Science at Brunel University London.
Phonetic Substitution tables ::: Editable
This table was mostly used for calculations of pairwise linguistic distances. Symbol “%” indicates comments.
t(S1,S2,D):-
S1=S2 -> D=0 ; ( t1(S1,S2,D) -> true ; ( t1(S2,S1,D) -> true ; D=1)).
t1(b,p,D):- tweight(consonant1,D).
t1(d,t,D):- tweight(consonant1,D).
t1(g,k,D):- tweight(consonant1,D).
t1(p,f,D):- tweight(consonant1,D).
t1(t,'T',D):- tweight(consonant1,D).
t1(k,'C',D):- tweight(consonant1,D).
t1('C',h,D):- tweight(consonant1,D).
t1(b,f,D):- tweight(consonant1x2,D).
t1(d,'T',D):- tweight(consonant1x2,D).
t1(g,'C',D):- tweight(consonant1x2,D).
t1(g,h,D):- tweight(consonant1x3,D).
t1(f,v,D):- tweight(consonant1,D).
t1(g,j,D):- tweight(consonant1,D).
t1(s,z,D):- tweight(consonant1,D).
t1(v,w,D):- tweight(consonant1,D).
t1(f,w,D):- tweight(consonant1x2,D).
t1('F',w,D):- tweight(consonant1x2,D).
t1(f,'F',0).
t1('S','a',0).
t1('C',' ',0).
t1('T','¸',0).
t1('a',s,D):- tweight(consonant1,D).
t1('S',s,D):- tweight(consonant1,D).
t1('C','S',D):- tweight(consonant1,D).
t1('C','a',D):- tweight(consonant1,D).
t1(' ','S',D):- tweight(consonant1,D).
t1(' ','a',D):- tweight(consonant1,D).
t1('K',k,D):- tweight(consonant1,D).
t1('G',k,D):- tweight(consonant1,D).
t1('G',g,D):- tweight(consonant1,D).
t1('K','G',D):- tweight(consonant1,D).
t1('Z',z,D):- tweight(consonant1,D).
t1(c,s,D):- tweight(consonant1,D).
t1(x,k,D):- tweight(consonant1,D).
t1('D',d,D):- tweight(consonant1,D).
t1(a,Y,V):- (Y=e;Y='E';Y=i;Y='I';Y=o;Y='O';Y=u;Y='U';Y=y;Y='Y'), tweight(vowel,V).
t1(e,Y,V):- (Y=a;Y='A';Y=i;Y='I';Y=o;Y='O';Y=u;Y='U';Y=y;Y='Y'), tweight(vowel,V).
t1(i,Y,V):- (Y=a;Y='A';Y=e;Y='E';Y=o;Y='O';Y=u;Y='U';Y=y;Y='Y'), tweight(vowel,V).
t1(o,Y,V):- (Y=a;Y='A';Y=e;Y='E';Y=i;Y='I';Y=u;Y='U';Y=y;Y='Y'), tweight(vowel,V).
t1(u,Y,V):- (Y=a;Y='A';Y=e;Y='E';Y=i;Y='I';Y=o;Y='O';Y=y;Y='Y'), tweight(vowel,V).
t1(y,Y,V):- (Y=a;Y='A';Y=e;Y='E';Y=i;Y='I';Y=o;Y='O';Y=u;Y='U'), tweight(vowel,V).
t1(A1,A2,0):- t_a(A1), t_a(A2).
t1(E1,E2,0):- t_e(E1), t_e(E2).
t1(I1,I2,0):- t_i(I1), t_i(I2).
t1(O1,O2,0):- t_o(O1), t_o(O2).
t1(U1,U2,0):- t_u(U1), t_u(U2).
t1(Y1,Y2,0):- t_y(Y1), t_y(Y2).
t1(X,Y,V):- tvowel(X), tvowel(Y), tweight(vowel,V).
t1('A',Y,V):- (Y='E';Y=e;Y='I';Y=i;Y='O';Y=o;Y='U';Y=u;Y='Y';Y=y), tweight(vowel,V).
t1('E',Y,V):- (Y='A';Y=a;Y='I';Y=i;Y='O';Y=o;Y='U';Y=u;Y='Y';Y=y), tweight(vowel,V).
t1('I',Y,V):- (Y='A';Y=a;Y='E';Y=e;Y='O';Y=o;Y='U';Y=u;Y='Y';Y=y), tweight(vowel,V).
t1('O',Y,V):- (Y='A';Y=a;Y='E';Y=e;Y='I';Y=i;Y='U';Y=u;Y='Y';Y=y), tweight(vowel,V).
t1('U',Y,V):- (Y='A';Y=a;Y='E';Y=e;Y='I';Y=i;Y='O';Y=o;Y='Y';Y=y), tweight(vowel,V).
t1('Y',Y,V):- (Y='A';Y=a;Y='E';Y=e;Y='I';Y=i;Y='O';Y=o;Y='U';Y=u), tweight(vowel,V).
t1('A',a,Z):- tweight(longvowel,Z).
t1('E',e,Z):- tweight(longvowel,Z).
t1('I',i,Z):- tweight(longvowel,Z).
t1('O',o,Z):- tweight(longvowel,Z).
t1('U',u,Z):- tweight(longvowel,Z).
t1('Y',y,Z):- tweight(longvowel,Z).
t1('M',m,Z):- tweight(longconsonant,Z).
t1('N',n,Z):- tweight(longconsonant,Z).
tweight(vowel,0.2).
tweight(longvowel,0.1).
tweight(consonant1,0.2).
tweight(consonant1x2,0.4).
tweight(consonant1x3,0.8).
tweight(longconsonant,0.05).
tvowel(V):- t_a(V); t_e(V); t_i(V); t_o(V); t_u(V); t_y(V).
Phonetic Substitution tables ::: editableGaby
This table was created based on the “Editable” table shown above. Comments and the “!!” symbol indicate where changes were made.
t(S1,S2,D):-
S1=S2 -> D=0 ; ( t1(S1,S2,D) -> true ; ( t1(S2,S1,D) -> true ; D=1)).
/*
Phonetic encoding
c - ts
x - ks
C - ch as in charity
k - as in cat
T - th
S - sh
G - dzh
K - kh
Z - zh
D - dz
H - spanish/portuguese sound of 'j'
F - ph
A,I,O,U,Y - long vowels
*/
t1(b,p,D):- tweight(consonant1,D).
t1(d,t,D):- tweight(consonant1,D).
t1(g,k,D):- tweight(consonant1,D).
t1(p,f,D):- tweight(consonant1,D).
t1(t,'T',D):- tweight(consonant1,D).
t1(k,'C',D):- tweight(consonant1x2,D).
t1('C',h,D):- tweight(consonant1x2,D).
t1(b,f,D):- tweight(consonant1x2,D).
t1(d,'T',D):- tweight(consonant1x2,D).
t1(g,'C',D):- tweight(consonant1x2,D).
t1(g,h,D):- tweight(consonant1x1,D).
t1(f,v,D):- tweight(consonant1,D).
t1(g,j,D):- tweight(consonant1,D).
t1(s,z,D):- tweight(consonant1,D).
t1(v,w,D):- tweight(consonant1,D).
t1(f,w,D):- tweight(consonant1x2,D).
t1('F',w,D):- tweight(consonant1x2,D).
t1(f,'F',0).
t1('S','a',0).
t1('C',' ',0).
t1('T','¸',0).
t1('a',s,D):- tweight(consonant1,D).
t1('S',s,D):- tweight(consonant1,D).
t1('C','S',D):- tweight(consonant1,D).
t1('C','a',D):- tweight(consonant1,D).
t1(' ','S',D):- tweight(consonant1,D).
t1(' ','a',D):- tweight(consonant1,D).
t1('K',k,D):- tweight(consonant1,D).
t1('K',g,D):- tweight(consonant1,D).
t1('G','Z',D):- tweight(consonant1,D).
t1('G','C',D):- tweight(consonant1,D).
t1('K','G',D):- tweight(consonant1,D).
t1('Z',z,D):- tweight(consonant1,D).
t1('Z',s,D):- tweight(consonant1x2,D).
t1(c,s,D):- tweight(consonant1,D).
t1(x,k,D):- tweight(consonant1,D).
t1('D',d,D):- tweight(consonant1,D).
t1('K',g,D):- tweight(consonant1,D).
t1('H','K',D):- tweight(consonant1,D).
t1('H',g,D):- tweight(consonant1,D).
t1('H',k,D):- tweight(consonant1,D).
t1('H',h,D):- tweight(consonant1,D).
t1(a,Y,V):- (Y=e;Y='E';Y=i;Y='I';Y=o;Y='O';Y=u;Y='U';Y=y;Y='Y'), tweight(vowel,V).
t1(e,Y,V):- (Y=a;Y='A';Y=i;Y='I';Y=o;Y='O';Y=u;Y='U';Y=y;Y='Y'), tweight(vowel,V).
t1(i,Y,V):- (Y=a;Y='A';Y=e;Y='E';Y=o;Y='O';Y=u;Y='U';Y=y;Y='Y'), tweight(vowel,V).
t1(o,Y,V):- (Y=a;Y='A';Y=e;Y='E';Y=i;Y='I';Y=u;Y='U';Y=y;Y='Y'), tweight(vowel,V).
t1(u,Y,V):- (Y=a;Y='A';Y=e;Y='E';Y=i;Y='I';Y=o;Y='O';Y=y;Y='Y'), tweight(vowel,V).
t1(y,Y,V):- (Y=a;Y='A';Y=e;Y='E';Y=i;Y='I';Y=o;Y='O';Y=u;Y='U'), tweight(vowel,V).
t1(A1,A2,0):- t_a(A1), t_a(A2).
t1(E1,E2,0):- t_e(E1), t_e(E2).
t1(I1,I2,0):- t_i(I1), t_i(I2).
t1(O1,O2,0):- t_o(O1), t_o(O2).
t1(U1,U2,0):- t_u(U1), t_u(U2).
t1(Y1,Y2,0):- t_y(Y1), t_y(Y2).
t1(X,Y,V):- tvowel(X), tvowel(Y), tweight(vowel,V).
t1('A',Y,V):- (Y='E';Y=e;Y='I';Y=i;Y='O';Y=o;Y='U';Y=u;Y='Y';Y=y), tweight(vowel,V).
t1('E',Y,V):- (Y='A';Y=a;Y='I';Y=i;Y='O';Y=o;Y='U';Y=u;Y='Y';Y=y), tweight(vowel,V).
t1('I',Y,V):- (Y='A';Y=a;Y='E';Y=e;Y='O';Y=o;Y='U';Y=u;Y='Y';Y=y), tweight(vowel,V).
t1('O',Y,V):- (Y='A';Y=a;Y='E';Y=e;Y='I';Y=i;Y='U';Y=u;Y='Y';Y=y), tweight(vowel,V).
t1('U',Y,V):- (Y='A';Y=a;Y='E';Y=e;Y='I';Y=i;Y='O';Y=o;Y='Y';Y=y), tweight(vowel,V).
t1('Y',Y,V):- (Y='A';Y=a;Y='E';Y=e;Y='I';Y=i;Y='O';Y=o;Y='U';Y=u), tweight(vowel,V).
t1('A',a,Z):- tweight(longvowel,Z).
t1('E',e,Z):- tweight(longvowel,Z).
t1('I',i,Z):- tweight(longvowel,Z).
t1('O',o,Z):- tweight(longvowel,Z).
t1('U',u,Z):- tweight(longvowel,Z).
t1('Y',y,Z):- tweight(longvowel,Z).
t1('M',m,Z):- tweight(longconsonant,Z).
t1('N',n,Z):- tweight(longconsonant,Z).
tweight(vowel,0.2).
tweight(longvowel,0.1).
tweight(consonant1,0.2).
tweight(consonant1x2,0.4).
tweight(consonant1x3,0.8).
tweight(longconsonant,0.05).
tvowel(V):- t_a(V); t_e(V); t_i(V); t_o(V); t_u(V); t_y(V).
Dendrograms and Cluster plots ::: Sheep counting systems
Figures FIGREF121 and FIGREF122.
Dendrograms and Cluster plots ::: Dendrograms of Bhattacharya scores of colour words
Figures FIGREF124, FIGREF125, FIGREF126, FIGREF127 and FIGREF128. | Unanswerable |
a71ebd8dc907d470f6bd3829fa949b15b29a0631 | a71ebd8dc907d470f6bd3829fa949b15b29a0631_0 | Q: how did they ask if a tweet was racist?
Text:
Stéphan Tulkens, Lisa Hilte, Elise Lodewyckx, Ben Verhoeven, Walter Daelemans
CLiPS Research Center, University of Antwerp
Prinsstraat 13, 2000, Antwerpen, Belgium
{stephan.tulkens, lisa.hilte, ben.verhoeven, walter.daelemans}@uantwerpen.be,
[email protected]
We present a dictionary-based approach to racism detection in Dutch social media comments, which were retrieved from two public Belgian social media sites likely to attract racist reactions. These comments were labeled as racist or non-racist by multiple annotators. For our approach, three discourse dictionaries were created: first, we created a dictionary by retrieving possibly racist and more neutral terms from the training data, and then augmenting these with more general words to remove some bias. A second dictionary was created through automatic expansion using a word2vec model trained on a large corpus of general Dutch text. Finally, a third dictionary was created by manually filtering out incorrect expansions. We trained multiple Support Vector Machines, using the distribution of words over the different categories in the dictionaries as features. The best-performing model used the manually cleaned dictionary and obtained an F-score of 0.46 for the racist class on a test set consisting of unseen Dutch comments, retrieved from the same sites used for the training set. The automated expansion of the dictionary only slightly boosted the model's performance, and this increase in performance was not statistically significant. The fact that the coverage of the expanded dictionaries did increase indicates that the words that were automatically added did occur in the corpus, but were not able to meaningfully impact performance. The dictionaries, code, and the procedure for requesting the corpus are available at: https://github.com/clips/hades.
Racism, word2vec, Dictionary-based Approaches, Computational Stylometry
Introduction
Racism is an important issue which is not easily defined, as racist ideas can be expressed in a variety of ways. Furthermore, there is no clear definition of what exactly constitutes a racist utterance; what is racist to one person is highly likely to not be considered racist universally. Additionally, although there exist mechanisms for reporting acts of racism, victims often neglect to do so as they feel that reporting the situation will not solve anything, according to Unia, the Belgian Interfederal Centre for Equal Opportunities. The scope of this issue, however, is currently unknown. Hence, the goal of our system is two-fold: it can be used to shed light on how many racist remarks are not being reported online, and furthermore, the automated detection of racism could provide interesting insights in the linguistic mechanisms used in racist discourse.
In this study, we try to automatically detect racist language in Dutch social media comments, using a dictionary-based approach. We retrieved and annotated comments from two public social media sites which were likely to attract racist reactions according to Unia. We use a Support Vector Machine to automatically classify comments, using handcrafted dictionaries, which were later expanded using automated techniques, as features.
We first discuss previous research on our subject and methodology, and discuss the problem of defining racist language (section "Annotation Style" ). Next, we describe our data (section "Datasets and Annotations" ). Finally, after discussing the experimental setup (section "Experimental Setup" ), we present our results (section "Results and Discussion" ).
Related Research
The classification of racist insults presents us with the problem of giving an adequate definition of racism. More so than in other domains, judging whether an utterance is an act of racism is highly personal and does not easily fit a simple definition. The Belgian anti-racist law forbids discrimination, violence and crime based on physical qualities (like skin color), nationality or ethnicity, but does not mention textual insults based on these qualities. Hence, this definition is not adequate for our purposes, since it does not include the racist utterances one would find on social media; few utterances that people might perceive as racist are actually punishable by law, as only utterances which explicitly encourage the use of violence are illegal. For this reason, we use a common sense definition of racist language, including all negative utterances, negative generalizations and insults concerning ethnicity, nationality, religion and culture. In this, we follow paolo2015racist, bonilla2002linguistics and razavi2010offensive, who show that racism is no longer strictly limited to physical or ethnic qualities, but can also include social and cultural aspects.
Additionally, several authors report linguistic markers of racist discourse; vandijk reports that the number of available topics is greatly restricted when talking about foreigners. paolo2015racist, who performed a qualitative study of posts from Italian social media sites, shows that these chosen topics are typically related to migration, crime and economy. Furthermore, the use of stereotypes and prejudiced statements BIBREF0 , BIBREF1 , as well as a heightened occurrence of truth claims BIBREF2 , BIBREF3 , are reported as typical characteristics of racist discourse . Finally, racist utterances are said to contain specific words and phrases, i.e. n-grams, significantly more often than neutral texts, like “our own kind” and “white civilization” BIBREF2 , BIBREF3 .
Stylistically, racist discourse is characterized by a higher rate of certain word classes, like imperatives and adjectives and a higher noun-adjective ratio BIBREF4 , BIBREF2 , BIBREF3 . Greevy and Smeaton also report a more frequent use of modals and adverbs, which they link to the higher frequency of truth claims in racist utterances BIBREF2 , BIBREF3 . In several studies, pronoun use is reported as an important feature in the detection of racist language. While paolo2015racist reports a high frequency of (especially first person plural) pronouns in racist data, vandijk reports a more general finding: the importance of us and them constructions in racist discourse. He explains that they involve a `semantic move with a positive part about Us and a negative part about Them' BIBREF5 . Using such constructions, one linguistically emphasizes - either deliberately or subconsciously - a divide between groups of people. A strict interpretation implies that even positive utterances about `them' can be perceived as racist, as they can also imply a divide between us and them. In this sense, Van Dijk's definition of racism is subtler, but also broader, than the definition used in our own research: we only count negative utterances and generalizations about groups of people as racist.
Our dictionary-based approach is inspired by methods used in previous research, like LIWC (Linguistic Inquiry and Word Count) BIBREF6 . LIWC is a dictionary-based computational tool that counts word frequencies for both grammatical categories (e.g. pronouns) and content-related categories (e.g. negative emotion words). As LIWC uses counts per category instead of individual words' frequencies, it allows for broader generalizations on functionally or semantically related words.
The construction of dictionary categories related to racist discourse (cf. section "Dictionaries" ) is largely based on linguistic properties of racist language reported in earlier work (see above). Additionally, the categories were adjusted to fit the corpus used in the research, which differs from corpora used in other studies. As our corpus is retrieved from social media sites with an anti-Islamic orientation, we added categories to reflect anti-religious sentiment. The relevant features in this study therefore differ from those reported in other studies, as different words are used to insult different groups of people BIBREF3 .
Finally, some other successful quantitative approaches to racism detection that have been used in earlier studies are a bag of words (BoW) approach as well as the analysis of part-of-speech (PoS) tags BIBREF2 , BIBREF3 . We leave the addition of these features to future work.
Datasets and Annotations
In this section, we describe our data collection, our annotation guidelines ( "Annotation Style" ) and the results of our annotations ( "Conclusions and Future Work" and "Test data" ).
For our current research we collected a corpus of social media comments, consisting of comments retrieved from Facebook sites which were likely to attract racist reactions in their comments. We specifically targeted two sites: the site of a prominent Belgian anti-Islamic organization, and the site of a Belgian right-wing organization. In both cases the Facebook sites were officially condoned by the organizations, and in the first case served as a communication platform to organize political gatherings. While both sites, the former more than the latter, explicitly profess to be non-racist, the comments they attracted were still highly critical of foreigners and, predictably, Muslims. This is also the reason we mined comments from these sites, and not the posts themselves. While the narrow focus of the sites introduces bias into our data, as the opinions of the people visiting these sites will not reflect the opinions of the general population, they do contain a good proportion of racist to non-racist data.
Annotation Style
We annotated the retrieved comments with three different labels: `racist', `non-racist' and `invalid'.
The `racist' label describes comments that contain negative utterances or insults about someone's ethnicity, nationality, religion or culture. This definition also includes utterances which equate, for example, an ethnic group to an extremist group, as well as extreme generalizations. The following examples are comments that were classified as racist:
Het zijn precies de vreemden die de haat of het racisme opwekken bij de autochtonen.
It is the foreigners that elicit hate and racism from natives.
Kan je niets aan doen dat je behoort tot het ras dat nog minder verstand en gevoelens heeft in uw hersenen dan het stinkend gat van een VARKEN ! :-p
You cannot help the fact that you belong to the race that has less intellect and sense in their brains than the smelly behind of a PIG! :-P
Wil weer eens lukken dat wij met het vuilste krapuul zitten, ik verschiet er zelfs niet van!
Once again we have to put up with the filthiest scum, it doesn't even surprise me anymore!
The label `invalid' was used for comments that were written in languages other than Dutch, or that did not contain any textual information, i.e. comments that solely consist of pictures or links. Before classification, we excluded these from both our training and test set.
The final label, `non-racist', was the default label. If a comment was valid, but could not be considered racist according to our definition, this was the label we used.
Training Data
To collect the training data, we used Pattern BIBREF7 to scrape the 100 most recent posts from both sites, and then extracted all comments which reacted to these comments. This resulted in 5759 extracted comments: 4880 from the first site and 879 from the second site. The second site attracted a lot less comments on each post, possibly because the site posted more frequently. In addition to this, the organization behind the first site had been figuring prominently in the news at the time of extraction, which might explain the divide in frequency of comments between the two sites. The corpus was annotated by two annotators, who were both students of comparable age and background. When A and B did not agree on a label, a third annotator, C, was used as a tiebreaker in order to obtain gold-standard labels. Table 1 shows the gold standard for the training set.
We calculated inter-annotator agreement using the Kappa score ( $\kappa $ ) BIBREF8 . On the training corpus, the agreement score was $\kappa $ = 0.60. Annotator A used the racist tag much less often than annotator B. Interestingly, the agreement remains relatively high; 79% of the comments that A annotated as racist were also annotated as racist by B. Even though B was much more inclined to call utterances racist, A and B still shared a common ground regarding their definition of racism. Examining the comments in detail, we found that the difference can largely be explained by sensitivity to insults and generalizations, as example 4 shows.
Oprotten die luizegaards [sic] !!!
Throw those lice carriers out!
While annotator B considers this utterance to be racist, annotator A does not, as it does not contain a specific reference to an ethnicity, nationality or religion. That is, when not seen in the context of this specific annotation task this sentence would not necessarily be called racist, just insulting.
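For reference, the pairwise agreement figures reported above can be computed with scikit-learn as in the minimal sketch below; the label vectors here are toy stand-ins for the real annotations.

# Toy sketch: inter-annotator agreement as Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["racist", "non-racist", "non-racist", "racist", "non-racist"]
annotator_b = ["racist", "racist", "non-racist", "racist", "non-racist"]

print(round(cohen_kappa_score(annotator_a, annotator_b), 2))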
Test data
The test corpus was mined in the same way as the training set, at a different point in time. We mined the first 500 and first 116 comments from the first and second site, respectively, which makes the proportion between sites more or less identical to the proportions in the train corpus. The annotation scheme was identical to the one for the train set, with the difference that C, who previously performed the tiebreak, now became a regular annotator. The first 25% of each batch of comments, i.e. 125 comments for the first site and 30 comments for the second site, were annotated by all three annotators to compute inter-annotator agreement. The remaining comments were equally divided among annotators. The annotator agreement was $\kappa $ = 0.54 (pairwise average), which is lower than the agreement on the training data. The reason for the lower agreement was that annotator C often did not agree with A and B. Because the pattern of mismatches between the annotators is quite regular, we will now discuss some of the annotations in detail:
we kunnen niet iedereen hier binnen laten want dat betekend [sic] het einde van de europese beschaving We cannot let everyone in because that will mean the end of European civilization
Eigen volk gaat voor, want die vuile manieren van de EU moeten wij vanaf. Geen EU en geen VN. Waardeloos en tegen onze mensen. (eigen volk.)
Put our own people first, because we need to get rid of the foul manners of the EU. No EU nor UN. Useless and against our people. (own folk.)
Burgemeester Termont is voor de zwartzakken die kiezen voor hem
Mayor Termont supports the black sacks, as they vote for him
Annotator C used the `racist' tag more often, which is probably due to the fact that he consistently annotated overt ideological statements related to immigration as `racist', while the other annotators did not. The three examples mentioned above are utterances that C classified as `racist', but A and B classified as `not racist'.
The cause of these consistent differences in annotations might be cultural, as C is from the southern part of the Netherlands, whereas A and B are native to the northern part of Belgium. Some terms are simply misannotated by C because they are Flemish vernacular expressions. For example, zwartzak [black sack], from sentence 7, superficially looks like a derogatory term for a person of color, but actually does not carry this meaning, as it is a slang word for someone who collaborated with the German occupying forces in the Second World War. While this could still be classified as being racist, the point is that C only registered this as a slang word based on skin color, and not a cultural or political term. Finally, it is improbable that the cause of these mismatches is annotator training, as A and B did not discuss their annotations during the task. In addition to this, C functioned as a tiebreaker in the first dataset, and thus already had experience with the nature of the training material.
Experimental Setup
In this section, we describe our experimental setup. We will first discuss our dictionary-based approach, describing both the LIWC dictionary we used as well as the construction of dictionaries related to racist discourse (section "Dictionaries" ). Next, we will describe the preprocessing of the data (section "Preprocessing and Featurization" ).
Dictionaries
In our classification task, we will use the LIWC dictionaries for Dutch BIBREF9 . We hypothesize that some of LIWC's word categories can be useful in detecting (implicit) racist discourse, as some of these categories are associated with markers of racist discourse reported in previous research (cf. section "Annotation Style" ), including pronouns, negative emotion words, references to others, certainty, religion and curse words.
In addition to the Dutch LIWC data, we created a dictionary containing words that specifically relate to racist discourse. We expect a dictionary-based approach in which words are grouped into categories to work well in this case because many of the racist terms used in our corpus were neologisms and hapaxes, like halalhoer (halal prostitute). Alternatively, existing terms are often reused in a ridiculing fashion, e.g. using the word mossel (mussel) to refer to Muslims. The dictionary was created as follows: after annotation, terms pertaining to racist discourse were manually extracted from the training data. These were then grouped into different categories, where most categories have both a neutral and a negative subcategory. The negative subcategory contains explicit insults, while the neutral subcategory contains words that are normally used in a neutral fashion, e.g. zwart (black), Marokkaan (Moroccan), but which might also be used in a more implicit racist discourse; e.g. people that often talk about nationalities or skin color might be participating in a racist us and them discourse. An overview of the categories can be found in Table 2 .
After creating the dictionary, we expanded these word lists both manually and automatically. First, we manually added an extensive list of countries, nationalities and languages, to remove some of the bias present in our training corpus. To combat sparsity, and to catch productive compounds which are likely to be used in a racist manner, we added wildcards to the beginning or end of certain words. We used two different wildcards. * is an inclusive wildcard; it matches the word with or without any affixes, e.g. moslim* matches both moslim (Muslim) and moslims (Muslims). + is an exclusive wildcard; it only matches words when an affix is attached, e.g. +moslim will match rotmoslim (Rotten Muslim) but not moslim by itself. In our corpus (which is skewed towards racism), the + will almost always represent a derogatory prefix, which is why it figures more prominently in the negative part of our dictionary.
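The sketch below illustrates one way such wildcard entries could be compiled into token-matching regular expressions; the entries are the examples from the text, and the actual matching code in the released implementation may differ in details.

# Sketch: compile dictionary entries with inclusive (*) and exclusive (+) wildcards
# into regular expressions that match whole tokens.
import re

def compile_entry(entry):
    pattern = re.escape(entry.strip("*+"))
    if entry.endswith("*"):
        pattern = pattern + r"\w*"      # optional suffix allowed
    if entry.startswith("*"):
        pattern = r"\w*" + pattern      # optional prefix allowed
    if entry.endswith("+"):
        pattern = pattern + r"\w+"      # a suffix is required
    if entry.startswith("+"):
        pattern = r"\w+" + pattern      # a prefix is required
    return re.compile(r"^" + pattern + r"$", re.IGNORECASE)

moslim_any = compile_entry("moslim*")
moslim_prefixed = compile_entry("+moslim")

print(bool(moslim_any.match("moslims")))        # True: suffix is optional
print(bool(moslim_prefixed.match("moslim")))    # False: a prefix is required
print(bool(moslim_prefixed.match("rotmoslim"))) # True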
A downside of using dictionaries for the detection of racism, is that they do not include a measure of context. Therefore, a sentence such as “My brother hated the North African brown rice and lentils we made for dinner” will be classified as racist, regardless of the fact that the words above do not occur in a racist context. Approaches based on word unigrams or bigrams face similar problems. This problem is currently partially absolved by the fact that we are working with a corpus skewed towards racism: words like `brown' and `African' are more likely to be racist words in our corpus than in general text.
To broaden the coverage of the categories in our dictionary, we performed dictionary expansion on both the neutral and the negative categories using word2vec BIBREF10 . word2vec is a collection of models capable of capturing semantic similarity between words based on the sentential contexts in which these words occur. It does so by projecting words into an n-dimensional space, and giving words with similar contexts similar places in this space. Hence, words which are closer to each other as measured by cosine distance, are more similar. Because we observed considerable semantic variation in the insults in our corpus, we expect that dictionary expansion using word2vec will lead to the extraction of previously unknown insults, as we assume that similar insults are used in similar contexts. In parallel, we know that a lot of words belonging to certain semantic categories, such as diseases and animals, can almost invariably be used as insults.
The expansion proceeded as follows: for each word in the dictionary, we retrieved the five closest words, i.e. the five most similar words, in the n-dimensional space, and added these to the dictionary. Wildcards were not taken into account for this task, e.g. *jood was replaced by jood for the purposes of expansion. As such, the expanded words do not have any wildcards attached to them. For expansion we used the best-performing model from tulkens2016, which is based on a corpus of 3.9 billion words of general Dutch text. Because this word2vec model was trained on general text, the semantic relations contained therein are not based on racist or insulting text, which will improve the coverage of our expanded categories.
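The sketch below illustrates this expansion step with gensim, assuming a pretrained Dutch word2vec model stored at a hypothetical path dutch_w2v.bin; the category contents are simplified to short illustrative word lists.

# Sketch: expand each dictionary category with the 5 nearest neighbours
# of every word under a pretrained word2vec model.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("dutch_w2v.bin", binary=True)

dictionary = {"Religion_neutral": ["moslim", "islam"],
              "Nationality_neutral": ["marokkaan"]}

expanded = {}
for category, words in dictionary.items():
    new_words = set(words)
    for word in words:
        if word not in vectors:
            continue
        for neighbour, _score in vectors.most_similar(word, topn=5):
            new_words.add(neighbour.lower())
    expanded[category] = sorted(new_words)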
After expansion, we manually searched the expanded dictionaries and removed obviously incorrect items. Because the word2vec model also includes some non-Dutch text, e.g. Spanish, some categories were expanded incorrectly. As a result, we have 3 different dictionaries with which we perform our experiments: the original dictionary which was based on the training data, a version which was expanded using word2vec, and a cleaned version of this expanded version. The word frequencies of the dictionaries are given in Table 3 . An example of expansion is given in Table 4 .
Preprocessing and Featurization
For preprocessing, the text was first tokenized using the Dutch tokenizer from Pattern BIBREF7 , and then lowercased and split on whitespace, which resulted in lists of words which are appropriate for lexical processing.
Our dictionary-based approach, like LIWC, creates an n-dimensional vector of normalized and scaled numbers, where n is the number of dictionary categories. These numbers are obtained by dividing the frequency of words in every specific category by the total number of words in the comment. Because all features are already normalized and scaled, there was no need for further scaling. Furthermore, because the number of features is so small, we did not perform explicit feature selection.
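The sketch below illustrates this featurization for a single tokenized comment, using plain word sets per category; in the actual system wildcard entries are matched as described above and the category inventory is larger.

# Sketch: represent a comment as the proportion of its tokens
# that belong to each dictionary category.
def featurize(tokens, dictionary):
    total = max(len(tokens), 1)
    features = []
    for category in sorted(dictionary):
        words = dictionary[category]
        hits = sum(1 for tok in tokens if tok.lower() in words)
        features.append(hits / total)
    return features

dictionary = {"Skin_colour_neutral": {"zwart", "blank", "bruin"},
              "Religion_negative": {"halalhoer"}}

comment = ["die", "zwart", "mensen"]      # toy tokenized comment
print(featurize(comment, dictionary))     # [0.0, 0.333...]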
Performance on the Training Set
We estimated the optimal values for the SVM parameters by an exhaustive search through the parameter space, which led to the selection of an RBF kernel with a C value of 1 and a gamma of 0. For the SVM and other experiments, we used the implementation from Scikit-Learn BIBREF11 . Using cross-validation on the training data, all dictionary-based approaches with lexical categories related to racist discourse significantly outperformed models using only LIWC's general word categories. Since the current research concerns the binary classification of racist utterances, we only report scores for the positive class, i.e. the racist class. When only LIWC-categories were used as features, an F-score of 0.34 (std. dev. 0.07) was obtained for the racist class. When using the original discourse dictionary, we reached an F-score of 0.50 (std. dev. 0.05). Automatic expansion of the categories did not influence performance either (F-score 0.50, std. dev. 0.05). Similar results (0.49 F-score, std. dev. 0.05) were obtained when the expanded racism dictionaries were manually filtered. This result is not surprising, as the original dictionaries were created from the training data, and might form an exhaustive catalog of racist terms in the original corpus.
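The corresponding training step can be sketched as follows with scikit-learn; X_train and y_train are toy stand-ins for the dictionary-based feature vectors and the binary labels, and the parameter grid is illustrative rather than the exact grid that was searched.

# Sketch: grid search over RBF-kernel SVM parameters, scored on F1 of the racist class.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X_train = rng.rand(40, 10)          # toy stand-in for dictionary feature vectors
y_train = rng.randint(0, 2, 40)     # toy binary labels (1 = racist)

param_grid = {"C": [0.1, 1, 10, 100],
              "gamma": ["scale", 0.01, 0.1, 1.0]}

search = GridSearchCV(SVC(kernel="rbf"), param_grid, scoring="f1", cv=5)
search.fit(X_train, y_train)
print(search.best_params_, round(search.best_score_, 2))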
Combining the features generated by LIWC with the specific dictionary-based features led to worse results compared to the dictionary-based features by themselves (F-score 0.40, std. dev. 0.07 for the best-performing model).
Finally, all models based on the dictionary features as well as the combined model outperformed a unigram baseline of 0.36, but the LIWC model did not. We also report a weighted random baseline (WRB), which was outperformed by all models.
Testing the Effect of Expansion
As seen above, the performance of the different models on the train set was comparable, regardless of their expansion. This is due to the creation procedure for the dictionary: because the words in the original dictionary were directly retrieved from the training data, the expanded and cleaned versions might not be able to demonstrate their generalization performance, as most of the racist words from the training data will be included in the original dictionaries as well as the expanded dictionaries. This artifact might disappear in the test set, which was retrieved from the same two sites, but will most likely contain unseen words. These unseen words will not be present in the original dictionary, but could be present in the expanded version.
As Table 6 shows, the models obtain largely comparable performance on the test set, and outperform the unigram baseline by a wide margin.
In comparison to previous research, our approach leads to worse results than those of greevy2004text, who report a precision score of 0.93 and a recall score of 0.87, using an SVM with BOW features together with frequency-based term weights. It is, however, difficult to compare these scores to our performance, given that the data, method, and language differ.
Our best-performing model was based on the expanded and cleaned version of the dictionary, but this model only slightly outperformed the other models. Additionally, we also computed Area Under the Receiver Operating Characteristic Curve (ROC-AUC) scores for all models, also shown in Table 6 . ROC-AUC shows the probability of ranking a randomly chosen positive instance above a randomly chosen negative instance, thereby giving an indication of the overall performance of the models. This shows that all dictionaries have comparable AUC scores, and that each dictionary outperforms the unigram baseline. To obtain additional evidence, we computed the statistical significance of performance differences between the models based on the dictionaries and the unigram baseline model using approximate randomization testing (ART) BIBREF12 . An ART test between dictionary models reveals that none of the models had performance differences that were statistically significant. Similarly, all dictionary models outperformed the unigram baseline with statistical significance, with $p < 0.01$ for the models based on the cleaned and expanded dictionaries, and $p < 0.05$ for the models based on the original dictionary.
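A minimal sketch of such an approximate randomization test for the difference in F-score between two systems is given below; the number of shuffles and the toy predictions are illustrative.

# Sketch: approximate randomization test for the F1 difference between two systems.
import random
from sklearn.metrics import f1_score

def art_pvalue(gold, pred_a, pred_b, trials=10000, seed=1):
    rng = random.Random(seed)
    observed = abs(f1_score(gold, pred_a) - f1_score(gold, pred_b))
    count = 0
    for _ in range(trials):
        swapped_a, swapped_b = [], []
        for a, b in zip(pred_a, pred_b):
            if rng.random() < 0.5:          # randomly swap the two systems' outputs
                a, b = b, a
            swapped_a.append(a)
            swapped_b.append(b)
        diff = abs(f1_score(gold, swapped_a) - f1_score(gold, swapped_b))
        if diff >= observed:
            count += 1
    return (count + 1) / (trials + 1)

gold   = [1, 0, 1, 1, 0, 0, 1, 0]
pred_a = [1, 0, 1, 0, 0, 0, 1, 0]
pred_b = [0, 0, 1, 0, 1, 0, 1, 0]
print(art_pvalue(gold, pred_a, pred_b, trials=1000))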
To get more insight into why the expanded models were not more successful, we calculated dictionary coverage for every dictionary separately on the test set. If the expanded dictionaries do not have increased coverage, the reason for their similar performance is clear: not enough words have been added to affect the performance in any reasonable way. As Table 7 indicates, the coverage of the expanded dictionaries did increase, which indicates that the automated expansion, or manual deletion for that matter, contrary to expectations, did not add words that were useful for the classification of racist content. To obtain additional evidence for this claim, we looked at the number of comments that contained words from the original, cleaned and expanded dictionaries. The coverage in terms of total comments also increased, as well as the absolute number of racist comments that contained the added terms. Because the coverage in number of comments did not increase the performance of the dictionaries, we hypothesize that the terms that were included in the expanded dictionaries were not distributed clearly enough (over racist and neutral texts) to make a difference in the performance on the classification task.
Conclusions and Future Work
We developed a dictionary-based computational tool for automatic racism detection in Dutch social media comments. These comments were retrieved from public social media sites with an anti-Islamic orientation. The definition of racism we used to annotate the comments therefore includes religious and cultural racism as well, a phenomenon reported on in different studies BIBREF4 , BIBREF13 , BIBREF14 .
We use a Support Vector Machine to classify comments as racist or not based on the distribution of the comments' words over different word categories related to racist discourse. To evaluate the performance, we used our own annotations as gold standard. The best-performing model obtained an F-score of 0.46 for the racist class on the test set, which is an acceptable decrease in performance compared to cross-validation experiments on the training data (F-score 0.49, std. dev. 0.05). The dictionary used by the model was manually created by retrieving possibly racist and more neutral terms from the training data during annotation. The dictionary was then manually expanded, automatically expanded with a word2vec model and finally manually cleaned, i.e. irrelevant terms that were added automatically were removed. It did not prove useful to use general stylistic or content-based word categories along with the word lists specifically related to racist discourse.
Surprisingly, the expansion of the manually crafted dictionary did not boost the model's performance significantly. In (cross-validated) experiments on the training data, this makes sense, as the words in the different categories are retrieved from the training data itself, artificially making the dictionary very appropriate for the task. In the test runs, however, a better result could be expected from the generalized word lists. The expanded versions of the dictionary had higher overall coverage for the words in the corpus, as well as higher coverage in number of comments and in number of racist comments. This shows that the words that were automatically added, did indeed occur in our corpus. As the model's performance more or less stagnated when using the expanded categories compared to the original ones, we hypothesize that the terms that were automatically added by the word2vec model were irrelevant to the task of discriminating between racist and neutral texts.
In terms of future work, we will expand our research efforts to include more general social media text. Because we currently only use material which was gathered from sites skewed towards racism, the performance of our dictionary might have been artificially heightened, as the words in the dictionary only occur in racist contexts in our corpus. Therefore, including more general social media texts will serve as a good test of the generality of our dictionaries with regards to detecting insulting material.
Acknowledgments
We are very grateful towards Leona Erens and François Deleu from Unia for wanting to collaborate with us and for pointing us towards the necessary data. We thank the three anonymous reviewers for their helpful comments and advice.
Supplementary Materials
The supplementary materials are available at https://github.com/clips/hades | if it includes negative utterances, negative generalizations and insults concerning ethnicity, nationality, religion and culture. |
1546356a8c5893dc2d298dcbd96d0307731dd54d | 1546356a8c5893dc2d298dcbd96d0307731dd54d_0 | Q: What other cross-lingual approaches is the model compared to?
Text: Introduction
Morphological analysis (hajivc1998tagging, oflazer1994tagging, inter alia) is the task of predicting fine-grained annotations about the syntactic properties of tokens in a language such as part-of-speech, case, or tense. For instance, in Figure FIGREF2 , the given Portuguese sentence is labeled with the respective morphological tags such as Gender and its label value Masculine.
The accuracy of morphological analyzers is paramount, because their results are often a first step in the NLP pipeline for tasks such as translation BIBREF1 , BIBREF2 and parsing BIBREF3 , and errors in the upstream analysis may cascade to the downstream tasks. One difficulty, however, in creating these taggers is that only a limited amount of annotated data is available for a majority of the world's languages to learn these morphological taggers. Fortunately, recent efforts in morphological annotation follow a standard annotation schema for these morphological tags across languages, and now the Universal Dependencies Treebank BIBREF0 has tags according to this schema in 60 languages.
cotterell2017crossling have recently shown that combining this shared schema with cross-lingual training on a related high-resource language (HRL) gives improved performance on tagging accuracy for low-resource languages (LRLs). The output space of this model consists of tag sets such as {POS: Adj, Gender: Masc, Number: Sing}, which are predicted for a token at each time step. However, this model relies heavily on the fact that the entire space of tag sets for the LRL must match those of the HRL, which is often not the case, either due to linguistic divergence or small differences in the annotation schemes between the two languages. For instance, in Figure FIGREF2 “refrescante” is assigned a gender in the Portuguese UD treebank, but not in the Spanish UD treebank.
In this paper, we propose a method that instead of predicting full tag sets, makes predictions over single tags separately but ties together each decision by modeling variable dependencies between tags over time steps (e.g. capturing the fact that nouns frequently occur after determiners) and pairwise dependencies between all tags at a single time step (e.g. capturing the fact that infinitive verb forms don't have tense). The specific model is shown in Figure FIGREF4 , consisting of a factorial conditional random field (FCRF; sutton2007dynamic) with neural network potentials calculated by long short-term memory (LSTM; BIBREF4 ) at every variable node (§ SECREF3 ). Learning and inference in the model is made tractable through belief propagation over the possible tag combinations, allowing the model to consider an exponential label space in polynomial time (§ SECREF24 ).
This model has several advantages:
In the following sections, we describe the model and these results in more detail.
Problem Formulation
Formally, we define the problem of morphological analysis as the task of mapping a length-$n$ string of tokens $\mathbf{x} = x_1, \ldots, x_n$ into the target morphological tag sets for each token, $\mathbf{y} = y_1, \ldots, y_n$. For the $i$-th token, the target label $y_i$ defines a set of tags (e.g. {Gender: Masc, Number: Sing, POS: Verb}). An annotation schema defines a set $\mathcal{T}$ of $m$ possible tag types, with the $j$-th type (e.g. Gender) defining its set of possible labels $\mathcal{L}_j$ (e.g. {Masc, Fem, Neu}), so that each tag in $y_i$ takes its value from the corresponding label set. We must note that not all tags or attributes need to be specified for a token; usually, a subset of $\mathcal{T}$ is specified for a token and the remaining tags can be treated as mapping to a NULL value. Let $\mathcal{Y}$ denote the set of all possible tag sets.
Baseline: Tag Set Prediction
Data-driven models for morphological analysis are constructed using training data $\mathcal{D} = \{(\mathbf{x}^{(k)}, \mathbf{y}^{(k)})\}_{k=1}^{N}$ consisting of $N$ training examples. The baseline model BIBREF5 we compare with regards the output space of the model as a subset $\mathcal{Y}_{\mathrm{train}} \subset \mathcal{Y}$, where $\mathcal{Y}_{\mathrm{train}}$ is the set of all tag sets seen in this training data. Specifically, they solve the task as a multi-class classification problem where the classes are individual tag sets. In low-resource scenarios, this indicates that $|\mathcal{Y}_{\mathrm{train}}| \ll |\mathcal{Y}|$, and even for those tag sets existing in $\mathcal{Y}_{\mathrm{train}}$ we may have seen very few training examples. The conditional probability of a sequence of tag sets given the sentence is formulated as a 0th order CRF:
$$P(\mathbf{y} \mid \mathbf{x}) = \prod_{i=1}^{n} P(y_i \mid \mathbf{x}), \qquad y_i \in \mathcal{Y}_{\mathrm{train}}.$$
Instead, we would like to be able to generate any combination of tags from the set $\mathcal{Y}$, and share statistical strength among similar tag sets.
A Relaxation: Tag-wise Prediction
As an alternative, we could consider a model that performs prediction for each tag's label $y_{i,j}$ independently:
$$P(\mathbf{y} \mid \mathbf{x}) = \prod_{i=1}^{n} \prod_{j=1}^{m} P(y_{i,j} \mid \mathbf{x}).$$
This formulation has an advantage: since the tag predictions within a single time step are now independent, it is easy to generate any combination of tags from $\mathcal{Y}$. On the other hand, it is now difficult to model the interdependencies between tags in the same tag set $y_i$, a major disadvantage over the previous model. In the next section, we describe our proposed neural factor graph model, which can model not only dependencies within tags for a single token, but also dependencies across time steps, while still maintaining the flexibility to generate any combination of tags from $\mathcal{Y}$.
Neural Factor Graph Model
Due to the correlations between the syntactic properties that are represented by morphological tags, we can imagine that capturing the relationships between these tags through pairwise dependencies can inform the predictions of our model. These dependencies exist both among tags for the same token (intra-token pairwise dependencies), and across tokens in the sentence (inter-token transition dependencies). For instance, knowing that a token's POS tag is a Noun would strongly suggest that this token would have a NULL label for the tag Tense, with very few exceptions BIBREF6 . In a language where nouns follow adjectives, a tag set prediction {POS: Adj, Gender: Fem} might inform the model that the next token is likely to be a noun and have the same gender. The baseline model cannot explicitly model such interactions given their factorization in equation EQREF10 .
To incorporate the dependencies discussed above, we define a factorial CRF BIBREF7 , with pairwise links between cotemporal variables and transition links between the same types of tags. This model defines a distribution over the tag-set sequence $\mathbf{y}$ given the input sentence $\mathbf{x}$ as
$$P(\mathbf{y} \mid \mathbf{x}) = \frac{1}{Z(\mathbf{x})} \prod_{\alpha \in \mathcal{F}} \psi_\alpha(\mathbf{y}_\alpha, \mathbf{x}),$$
where $\mathcal{F}$ is the set of factors in the factor graph (as shown in Figure FIGREF4 ), $\alpha$ is one such factor, and $\mathbf{y}_\alpha$ is the assignment to the subset of variables neighboring factor $\alpha$. We define three types of potential functions: neural $\psi_{\mathrm{NN}}$, pairwise $\psi_{\mathrm{pair}}$, and transition $\psi_{\mathrm{trans}}$, described in detail below.
Neural Factors
The flexibility of our formulation allows us to include any form of custom-designed potentials in our model. Those for the neural factors have a fairly standard log-linear form,
$$\psi_{\mathrm{NN}}(y_{i,j}) = \exp \Big( \sum_{k} \lambda_k f_k(y_{i,j}, \mathbf{x}) \Big),$$
except that the features $f_k$ are themselves given by a neural network. There is one such factor per variable. We obtain our neural factors using a biLSTM over the input sequence $\mathbf{x}$, where the input word embedding for each token is obtained from a character-level biLSTM embedder. This component of our model is similar to the model proposed by BIBREF5 . Given an input token $x_i = c_1, \ldots, c_p$, we compute an input embedding $v_i$ as
$$v_i = \big[ \mathrm{cLSTM}(c_1 \ldots c_p) ; \mathrm{cLSTM}(c_p \ldots c_1) \big].$$
Here, $\mathrm{cLSTM}$ is a character-level LSTM function that returns the last hidden state. This input embedding $v_i$ is then used in the biLSTM tagger to compute an output representation $h_i$. Finally, the scores $e_i$ are obtained as
$$e_i = W_l \, h_i + b_l.$$
We use a language-specific linear layer with weights $W_l$ and bias $b_l$.
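A minimal PyTorch sketch of this scoring component is given below: a character-level biLSTM embedder feeding a word-level biLSTM, with one linear scoring layer per tag type producing the scores that parameterize the neural factors. The dimensions and tag inventories are illustrative, and the language-specific layers and training code of the full model are omitted.

# Sketch: char-biLSTM word embedder + word biLSTM + per-tag linear scorers.
import torch
import torch.nn as nn

class NeuralFactorScorer(nn.Module):
    def __init__(self, n_chars, tag_sizes, char_dim=32, word_dim=128, hidden=256):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, word_dim // 2, bidirectional=True, batch_first=True)
        self.word_lstm = nn.LSTM(word_dim, hidden // 2, num_layers=2,
                                 bidirectional=True, batch_first=True)
        # one scoring layer per tag type (e.g. POS, Gender, Number)
        self.scorers = nn.ModuleDict({t: nn.Linear(hidden, k) for t, k in tag_sizes.items()})

    def embed_word(self, char_ids):
        # char_ids: (1, n_characters) -> (1, word_dim) from the biLSTM's final states
        _, (h, _) = self.char_lstm(self.char_emb(char_ids))
        return torch.cat([h[0], h[1]], dim=-1)

    def forward(self, sentence_char_ids):
        # sentence_char_ids: list of (1, n_characters) tensors, one per token
        word_vecs = torch.stack([self.embed_word(c) for c in sentence_char_ids], dim=1)
        hidden, _ = self.word_lstm(word_vecs)            # (1, n_tokens, hidden)
        # these scores would be exponentiated to give the unary potentials used by BP
        return {t: scorer(hidden) for t, scorer in self.scorers.items()}

tag_sizes = {"POS": 17, "Gender": 4, "Number": 3}        # illustrative label-set sizes
model = NeuralFactorScorer(n_chars=100, tag_sizes=tag_sizes)
tokens = [torch.randint(0, 100, (1, 5)), torch.randint(0, 100, (1, 7))]
scores = model(tokens)
print(scores["Gender"].shape)                            # (1, 2, 4): one score vector per token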
Pairwise Factors
As discussed previously, the pairwise factors are crucial for modeling correlations between tags. The pairwise factor potential for a tag INLINEFORM0 and tag INLINEFORM1 at timestep INLINEFORM2 is given in equation EQREF20 . Here, the dimension of INLINEFORM3 is INLINEFORM4 . These scores are used to define the neural factors as, DISPLAYFORM0
Transition Factors
Previous work has experimented with the use of a linear chain CRF with factors from a neural network BIBREF8 for sequence tagging tasks. We hypothesize that modeling transition factors in a similar manner can allow the model to utilize information about neighboring tags and capture word order features of the language. The transition factor for tag INLINEFORM0 and timestep INLINEFORM1 is given below for variables INLINEFORM2 and INLINEFORM3 . The dimension of INLINEFORM4 is INLINEFORM5 . DISPLAYFORM0
In our experiments, INLINEFORM0 and INLINEFORM1 are simple indicator features for the values of tag variables with no dependence on INLINEFORM2 .
Language-Specific Weights
As an enhancement to the information encoded in the transition and pairwise factors, we experiment with training general and language-specific parameters for the transition and the pairwise weights. We define the weight matrix $W_{\mathrm{gen}}$ to learn the general trends that hold across both languages, and the weights $W_{\mathrm{lang}}$ to learn the exceptions to these trends. In our model, we sum both these parameter matrices before calculating the transition and pairwise factors. For instance, the transition weights $W_{\mathrm{trans}}$ are calculated as $W_{\mathrm{trans}} = W_{\mathrm{gen}} + W_{\mathrm{lang}}$.
Loopy Belief Propagation
Since the graph from Figure FIGREF4 is a loopy graph, performing exact inference can be expensive. Hence, we use loopy belief propagation BIBREF9 , BIBREF10 for computation of approximate variable and factor marginals. Loopy BP is an iterative message passing algorithm that sends messages between variables and factors in a factor graph. The message update from variable $v$, with neighboring factors $N(v)$, to factor $\alpha$ is
$$\mu_{v \rightarrow \alpha}(y_v) \propto \prod_{\beta \in N(v) \setminus \{\alpha\}} \mu_{\beta \rightarrow v}(y_v).$$
The message from factor $\alpha$ to variable $v$ is
$$\mu_{\alpha \rightarrow v}(y_v) \propto \sum_{\mathbf{y}_\alpha : \mathbf{y}_\alpha[v] = y_v} \psi_\alpha(\mathbf{y}_\alpha) \prod_{u \in N(\alpha) \setminus \{v\}} \mu_{u \rightarrow \alpha}(\mathbf{y}_\alpha[u]),$$
where $\mathbf{y}_\alpha$ denotes an assignment to the subset of variables adjacent to factor $\alpha$, and $\mathbf{y}_\alpha[v]$ is the assignment for variable $v$. Message updates are performed asynchronously in our model. Our message passing schedule was similar to that of forward-backward: the forward pass sends all messages from the first time step in the direction of the last. Messages to/from pairwise factors are included in this forward pass. The backward pass sends messages in the direction from the last time step back to the first. This process is repeated until convergence. We say that BP has converged when the maximum residual error BIBREF11 over all messages is below some threshold. Upon convergence, we obtain the belief values of variables and factors as
$$b_v(y_v) = \kappa_v \prod_{\alpha \in N(v)} \mu_{\alpha \rightarrow v}(y_v), \qquad b_\alpha(\mathbf{y}_\alpha) = \kappa_\alpha \, \psi_\alpha(\mathbf{y}_\alpha) \prod_{v \in N(\alpha)} \mu_{v \rightarrow \alpha}(\mathbf{y}_\alpha[v]),$$
where $\kappa_v$ and $\kappa_\alpha$ are normalization constants ensuring that the beliefs for a variable $v$ and a factor $\alpha$ sum to one. In this way, we can use the beliefs as approximate marginal probabilities.
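The sketch below illustrates, with numpy, the two message updates and the resulting variable belief for a single pairwise factor with unnormalized potentials; the full model runs such updates asynchronously over all factors until the maximum residual falls below the threshold.

# Sketch: sum-product messages for one pairwise factor psi over variables (u, v),
# followed by the belief (approximate marginal) and an MBR-style argmax for v.
import numpy as np

psi = np.array([[2.0, 0.5],        # pairwise potential, shape (|L_u|, |L_v|)
                [0.5, 2.0]])
msgs_to_u = [np.array([0.7, 0.3])] # messages into u from its other factors
msgs_to_v = [np.array([0.4, 0.6])] # messages into v from its other factors

def var_to_factor(other_msgs):
    m = np.prod(np.stack(other_msgs), axis=0)   # product of the other incoming messages
    return m / m.sum()

def factor_to_var(psi, msg_from_other_var, axis):
    # sum out the other variable, weighting by its incoming message
    m = np.tensordot(msg_from_other_var, psi, axes=([0], [axis]))
    return m / m.sum()

msg_u_to_f = var_to_factor(msgs_to_u)
msg_f_to_v = factor_to_var(psi, msg_u_to_f, axis=0)

belief_v = msg_f_to_v * np.prod(np.stack(msgs_to_v), axis=0)
belief_v /= belief_v.sum()
print(belief_v, belief_v.argmax())   # approximate marginal for v and its MBR label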
Learning and Decoding
We perform end-to-end training of the neural factor graph by following the (approximate) gradient of the log-likelihood INLINEFORM0 . The true gradient requires access to the marginal probabilities for each factor, e.g. INLINEFORM1 where INLINEFORM2 denotes the subset of variables in factor INLINEFORM3 . For example, if INLINEFORM4 is a transition factor for tag INLINEFORM5 at timestep INLINEFORM6 , then INLINEFORM7 would be INLINEFORM8 and INLINEFORM9 . Following BIBREF7 , we replace these marginals with the beliefs INLINEFORM10 from loopy belief propagation. Consider the log-likelihood of a single example INLINEFORM11 . The partial derivative with respect to parameter INLINEFORM12 for each type of factor INLINEFORM13 is the difference of the observed features with the expected features under the model's (approximate) distribution as represented by the beliefs: INLINEFORM14
where INLINEFORM0 denotes all the factors of type INLINEFORM1 , and we have omitted any dependence on INLINEFORM2 and INLINEFORM3 for brevity— INLINEFORM4 is accessible through the factor index INLINEFORM5 . For the neural network factors, the features are given by a biLSTM. We backpropagate through to the biLSTM parameters using the partial derivative below, INLINEFORM6
where INLINEFORM0 is the variable belief corresponding to variable INLINEFORM1 .
To predict a sequence of tag sets $\hat{\mathbf{y}}$ at test time, we use minimum Bayes risk (MBR) decoding BIBREF13 , BIBREF14 for Hamming loss over tags. For a variable $y_{i,j}$ representing tag $j$ at timestep $i$, we take
$$\hat{y}_{i,j} = \operatorname{arg\,max}_{\ell \in \mathcal{L}_j} b_{i,j}(\ell),$$
where $\ell$ ranges over the possible labels for tag $j$.
Dataset
We used the Universal Dependencies Treebank UD v2.1 BIBREF0 for our experiments. We picked four low-resource/high-resource language pairs, each from a different family: Danish/Swedish (da/sv), Russian/Bulgarian (ru/bg), Finnish/Hungarian (fi/hu), Spanish/Portuguese (es/pt). Picking languages from different families would ensure that we obtain results that are on average consistent across languages.
The sizes of the training and evaluation sets are specified in Table TABREF31 . In order to simulate low-resource settings, we follow the experimental procedure from BIBREF5 . We restrict the number of sentences of the target language in the training set to 100 or 1000 sentences. We also augment the tag sets in our training data by adding a NULL label for all tags that are not seen for a token. It is expected that our model will learn which tags are unlikely to occur given the variable dependencies in the factor graph. The dev set and test set are only in the target language. From Table TABREF32 , we can see there is also considerable variance in the number of unique tags and tag sets found in each of these language pairs.
Baseline Tagger
As the baseline tagger model, we re-implement the specific model from BIBREF5 that uses a language-specific softmax layer. Their model architecture uses a character biLSTM embedder to obtain a vector representation for each token, which is used as input in a word-level biLSTM. The output space of their model is all the tag sets seen in the training data. This work achieves strong performance on several languages from UD on the task of morphological tagging and is a strong baseline.
Training Regimen
We followed the parameter settings from BIBREF5 for the baseline tagger and the neural component of the FCRF-LSTM model. For both models, we set the input embedding and linear layer dimension to 128. We used 2 hidden layers for the LSTM where the hidden layer dimension was set to 256 and a dropout BIBREF15 of 0.2 was enforced during training. All our models were implemented in the PyTorch toolkit BIBREF16 . The parameters of the character biLSTM and the word biLSTM were initialized randomly. We trained the baseline models and the neural factor graph model with SGD and Adam respectively for 10 epochs each, in batches of 64 sentences. These optimizers gave the best performances for the respective models.
For the FCRF, we initialized transition and pairwise parameters with zero weights, which was important to ensure stable training. We considered BP to have reached convergence when the maximum residual error was below 0.05 or if the maximum number of iterations was reached (set to 40 in our experiments). We found that in cross-lingual experiments, when INLINEFORM0 , the relatively large amount of data in the HRL was causing our model to overfit on the HRL and not generalize well to the LRL. As a solution to this, we upsampled the LRL data by a factor of 10 when INLINEFORM1 for both the baseline and the proposed model.
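The settings reported in the last two paragraphs map onto a fairly standard PyTorch configuration. The snippet below only collects those hyperparameters together with a stand-in module and optimizer setup; it is not the authors' training code.

```python
import torch
import torch.nn as nn

# Hyperparameters reported above; everything else in this snippet is illustrative.
EMBED_DIM = 128                      # input embedding and linear layer dimension
HIDDEN_DIM = 256                     # hidden size of the 2-layer biLSTMs
DROPOUT = 0.2
BATCH_SIZE, EPOCHS = 64, 10
BP_TOL, BP_MAX_ITERS = 0.05, 40      # loopy BP convergence settings
LRL_UPSAMPLE = 10                    # upsampling factor for the low-resource language

word_bilstm = nn.LSTM(EMBED_DIM, HIDDEN_DIM, num_layers=2, dropout=DROPOUT,
                      bidirectional=True, batch_first=True)
scorer = nn.Linear(2 * HIDDEN_DIM, 128)   # stand-in language-specific linear layer

# Adam for the neural factor graph model; the baseline tagger used SGD instead.
optimizer = torch.optim.Adam(list(word_bilstm.parameters()) + list(scorer.parameters()))
```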
Previous work on morphological analysis BIBREF5, BIBREF17 has reported scores on average token-level accuracy and F1 measure. The average token-level accuracy counts a tag set prediction as correct only if it is an exact match with the gold tag set. On the other hand, F1 measure is computed on a tag-by-tag basis, which allows it to give partial credit to partially correct tag sets. Based on the characteristics of each evaluation measure, Accuracy will favor tag-set prediction models (like the baseline), and F1 measure will favor tag-wise prediction models (like our proposed method). Given the nature of the task, it seems reasonable to prefer getting some of the tags correct (e.g. Noun+Masc+Sing becomes Noun+Fem+Sing), instead of missing all of them (e.g. Noun+Masc+Sing becomes Adj+Fem+Plur). F-score gives partial credit for getting some of the tags correct, while tagset-level accuracy will treat these two mistakes equally. Based on this, we believe that F-score is intuitively a better metric. However, we report both scores for completeness.
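To make the contrast between the two measures concrete, one possible implementation of exact-match accuracy and a micro-averaged tag-wise F1 is sketched below; the precise F1 variant used in prior work may differ, so treat this as illustrative.

```python
def exact_match_accuracy(gold_sets, pred_sets):
    """Each element is a set of 'Tag=Value' strings for one token."""
    hits = sum(1 for g, p in zip(gold_sets, pred_sets) if g == p)
    return hits / len(gold_sets)

def tagwise_f1(gold_sets, pred_sets):
    tp = sum(len(g & p) for g, p in zip(gold_sets, pred_sets))
    fp = sum(len(p - g) for g, p in zip(gold_sets, pred_sets))
    fn = sum(len(g - p) for g, p in zip(gold_sets, pred_sets))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# e.g. Noun+Masc+Sing predicted as Noun+Fem+Sing gets partial credit from F1
gold = [{"POS=Noun", "Gender=Masc", "Number=Sing"}]
pred = [{"POS=Noun", "Gender=Fem", "Number=Sing"}]
print(exact_match_accuracy(gold, pred), round(tagwise_f1(gold, pred), 2))  # 0.0 0.67
```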
Main Results
First, we report the results in the case of monolingual training in Table TABREF33. The first row for each language pair reports the results for our reimplementation of cotterell2017crossling, and the second for our full model. From these results, we can see that we obtain improvements on the F-measure over the baseline method in most experimental settings, except BG with INLINEFORM0. In a few cases, the baseline model obtains higher accuracy scores, for the reason described in UID38.
In our cross-lingual experiments shown in Table TABREF37 , we also note F-measure improvements over the baseline model with the exception of DA/SV when INLINEFORM0 . We observe that the improvements are on average stronger when INLINEFORM1 . This suggests that our model performs well with very little data due to its flexibility to generate any tag set, including those not observed in the training data. The strongest improvements are observed for FI/HU. This is likely because the number of unique tags is the highest in this language pair and our method scales well with the number of tags due to its ability to make use of correlations between the tags in different tag sets.
To examine the utility of our transition and pairwise factors, we also report results on ablation experiments by removing transition and pairwise factors completely from the model in Table TABREF40 . Ablation experiments for each factor showed decreases in scores relative to the model where both factors are present, but the decrease attributed to the pairwise factors is larger, in both the monolingual and cross-lingual cases. Removing both factors from our proposed model results in a further decrease in the scores. These differences were found to be more significant in the case when INLINEFORM0 .
Upon looking at the tag set predictions made by our model, we found instances where our model utilizes variable dependencies to predict correct labels. For instance, for a specific phrase in Portuguese (um estado), the baseline model predicted {POS: Det, Gender: Masc, Number: Sing} INLINEFORM0 , {POS: Noun, Gender: Fem (X), Number: Sing} INLINEFORM1 , whereas our model was able to get the gender correct because of the transition factors in our model.
What is the Model Learning?
One of the major advantages of our model is the ability to interpret what the model has learned by looking at the trained parameter weights. We investigated both language-generic and language-specific patterns learned by our parameters:
Language-Generic: We found evidence for several syntactic properties learned by the model parameters. For instance, in Figure FIGREF42, we visualize the generic (INLINEFORM0) transition weights of the POS tags in Ru/Bg. Several universal trends, such as determiners and adjectives being followed by nouns, can be seen. In Figure FIGREF43, we also observed that the infinitive has a strong correlation with NULL tense, consistent with the universal phenomenon that infinitives don't have tense.
Language-Specific Trends: We visualized the learnt language-specific weights and looked for evidence of patterns corresponding to linguistic phenomena observed in a language of interest. For instance, in Russian, verbs are gender-specific in the past tense but not in other tenses. To analyze this, we plotted the pairwise weights for Gender/Tense in Figure FIGREF45 and verified strong correlations between the past tense and all gender labels.
Related Work
There exist several variations of the task of prediction of morphological information from annotated data: paradigm completion BIBREF18 , BIBREF19 , morphological reinflection BIBREF20 , segmentation BIBREF21 , BIBREF22 and tagging. Work on morphological tagging has broadly focused on structured prediction models such as CRFs, and neural network models. Amongst structured prediction approaches, BIBREF23 proposed a factor-graph based model that performed joint morphological tagging and parsing. BIBREF24 , BIBREF25 proposed the use of a higher-order CRF that is approximated using coarse-to-fine decoding. BIBREF26 proposed joint lemmatization and tagging using this framework. BIBREF27 was the first work that performed experiments on multilingual morphological tagging. They proposed an exponential model and the use of a morphological dictionary. BIBREF17 , BIBREF28 proposed a model that used tag projection of type and token constraints from a resource-rich language to a low-resource language for tagging.
Most recent work has focused on character-based neural models BIBREF29 , that can handle rare words and are hence more useful to model morphology than word-based models. These models first obtain a character-level representation of a token from a biLSTM or CNN, which is provided to a word-level biLSTM tagger. BIBREF29 , BIBREF30 compared several neural architectures to obtain these character-based representations and found the effect of the neural network architecture to be minimal given the networks are carefully tuned. Cross-lingual transfer learning has previously boosted performance on tasks such as translation BIBREF31 and POS tagging BIBREF32 , BIBREF33 . BIBREF5 proposed a cross-lingual character-level neural morphological tagger. They experimented with different strategies to facilitate cross-lingual training: a language ID for each token, a language-specific softmax and a joint language identification and tagging model. We have used this work as a baseline model for comparing with our proposed method.
In contrast to earlier work on morphological tagging, we use a hybrid of neural and graphical model approaches. This combination has several advantages: we can make use of expressive feature representations from neural models while ensuring that our model is interpretable. Our work is similar in spirit to BIBREF8 and BIBREF34 , who proposed models that use a CRF with features from neural models. For our graphical model component, we used a factorial CRF BIBREF7 , which is a generalization of a linear chain CRF with additional pairwise factors between cotemporal variables.
Conclusion and Future Work
In this work, we proposed a novel framework for sequence tagging that combines neural networks and graphical models, and showed its effectiveness on the task of morphological tagging. We believe this framework can be extended to other sequence labeling tasks in NLP such as semantic role labeling. Due to the robustness of the model across languages, we believe it can also be scaled to perform morphological tagging for multiple languages together.
Acknowledgments
The authors would like to thank David Mortensen, Soumya Wadhwa and Maria Ryskina for useful comments about this work. We would also like to thank the reviewers who gave valuable feedback to improve the paper. This project was supported in part by an Amazon Academic Research Award and Google Faculty Award. | The baseline model BIBREF5 we compare with regards the output space of the model as a subset INLINEFORM2 where INLINEFORM3 is the set of all tag sets seen in this training data. |
9f5507a8c835c4671020d7d310fff2930d44e75a | 9f5507a8c835c4671020d7d310fff2930d44e75a_0 | Q: What languages are explored?
Text: Introduction
Morphological analysis (hajivc1998tagging, oflazer1994tagging, inter alia) is the task of predicting fine-grained annotations about the syntactic properties of tokens in a language such as part-of-speech, case, or tense. For instance, in Figure FIGREF2 , the given Portuguese sentence is labeled with the respective morphological tags such as Gender and its label value Masculine.
The accuracy of morphological analyzers is paramount, because their results are often a first step in the NLP pipeline for tasks such as translation BIBREF1 , BIBREF2 and parsing BIBREF3 , and errors in the upstream analysis may cascade to the downstream tasks. One difficulty, however, in creating these taggers is that only a limited amount of annotated data is available for a majority of the world's languages to learn these morphological taggers. Fortunately, recent efforts in morphological annotation follow a standard annotation schema for these morphological tags across languages, and now the Universal Dependencies Treebank BIBREF0 has tags according to this schema in 60 languages.
cotterell2017crossling have recently shown that combining this shared schema with cross-lingual training on a related high-resource language (HRL) gives improved performance on tagging accuracy for low-resource languages (LRLs). The output space of this model consists of tag sets such as {POS: Adj, Gender: Masc, Number: Sing}, which are predicted for a token at each time step. However, this model relies heavily on the fact that the entire space of tag sets for the LRL must match those of the HRL, which is often not the case, either due to linguistic divergence or small differences in the annotation schemes between the two languages. For instance, in Figure FIGREF2 “refrescante” is assigned a gender in the Portuguese UD treebank, but not in the Spanish UD treebank.
In this paper, we propose a method that instead of predicting full tag sets, makes predictions over single tags separately but ties together each decision by modeling variable dependencies between tags over time steps (e.g. capturing the fact that nouns frequently occur after determiners) and pairwise dependencies between all tags at a single time step (e.g. capturing the fact that infinitive verb forms don't have tense). The specific model is shown in Figure FIGREF4 , consisting of a factorial conditional random field (FCRF; sutton2007dynamic) with neural network potentials calculated by long short-term memory (LSTM; BIBREF4 ) at every variable node (§ SECREF3 ). Learning and inference in the model is made tractable through belief propagation over the possible tag combinations, allowing the model to consider an exponential label space in polynomial time (§ SECREF24 ).
This model has several advantages:
In the following sections, we describe the model and these results in more detail.
Problem Formulation
Formally, we define the problem of morphological analysis as the task of mapping a length- INLINEFORM0 string of tokens INLINEFORM1 into the target morphological tag sets for each token INLINEFORM2 . For the INLINEFORM3 th token, the target label INLINEFORM4 defines a set of tags (e.g. {Gender: Masc, Number: Sing, POS: Verb}). An annotation schema defines a set INLINEFORM5 of INLINEFORM6 possible tag types and with the INLINEFORM7 th type (e.g. Gender) defining its set of possible labels INLINEFORM8 (e.g. {Masc, Fem, Neu}) such that INLINEFORM9 . We must note that not all tags or attributes need to be specified for a token; usually, a subset of INLINEFORM10 is specified for a token and the remaining tags can be treated as mapping to a INLINEFORM11 value. Let INLINEFORM12 denote the set of all possible tag sets.
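As a concrete (toy) illustration of this formulation, each token's label can be stored as a mapping from every tag type in the schema to a value, with NULL for unspecified tags. The schema below is hypothetical and much smaller than the UD schema.

```python
# Toy illustration of an annotation schema: each tag type m has a label set L_m
# (NULL marks tags left unspecified for a token).
SCHEMA = {
    "POS":    ["Noun", "Verb", "Adj", "Det", "NULL"],
    "Gender": ["Masc", "Fem", "Neu", "NULL"],
    "Number": ["Sing", "Plur", "NULL"],
    "Tense":  ["Past", "Pres", "NULL"],
}

def complete_tag_set(partial):
    """Fill in NULL for every tag type not specified for a token."""
    return {tag: partial.get(tag, "NULL") for tag in SCHEMA}

# y_t for the Portuguese token "refrescante" might be annotated as:
print(complete_tag_set({"POS": "Adj", "Gender": "Masc", "Number": "Sing"}))
```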
Baseline: Tag Set Prediction
Data-driven models for morphological analysis are constructed using training data INLINEFORM0 consisting of INLINEFORM1 training examples. The baseline model BIBREF5 that we compare with regards the output space of the model as a subset INLINEFORM2, where INLINEFORM3 is the set of all tag sets seen in this training data. Specifically, they solve the task as a multi-class classification problem where the classes are individual tag sets. In low-resource scenarios, this means that INLINEFORM4, and even for those tag sets existing in INLINEFORM5 we may have seen very few training examples. The conditional probability of a sequence of tag sets given the sentence is formulated as a 0th order CRF. DISPLAYFORM0
Instead, we would like to be able to generate any combination of tags from the set INLINEFORM0 , and share statistical strength among similar tag sets.
A Relaxation: Tag-wise Prediction
As an alternative, we could consider a model that performs prediction for each tag's label INLINEFORM0 independently. DISPLAYFORM0
This formulation has an advantage: since the tag predictions within a single time step are now independent, it is easy to generate any combination of tags from INLINEFORM0. On the other hand, it is now difficult to model the interdependencies between tags in the same tag set INLINEFORM1, a major disadvantage compared to the previous model. In the next section, we describe our proposed neural factor graph model, which can model not only dependencies among tags for a single token, but also dependencies across time steps, while still maintaining the flexibility to generate any combination of tags from INLINEFORM2.
Neural Factor Graph Model
Due to the correlations between the syntactic properties that are represented by morphological tags, we can imagine that capturing the relationships between these tags through pairwise dependencies can inform the predictions of our model. These dependencies exist both among tags for the same token (intra-token pairwise dependencies), and across tokens in the sentence (inter-token transition dependencies). For instance, knowing that a token's POS tag is a Noun, would strongly suggest that this token would have a INLINEFORM0 label for the tag Tense, with very few exceptions BIBREF6 . In a language where nouns follow adjectives, a tag set prediction {POS: Adj, Gender: Fem} might inform the model that the next token is likely to be a noun and have the same gender. The baseline model can not explicitly model such interactions given their factorization in equation EQREF10 .
To incorporate the dependencies discussed above, we define a factorial CRF BIBREF7 , with pairwise links between cotemporal variables and transition links between the same types of tags. This model defines a distribution over the tag-set sequence INLINEFORM0 given the input sentence INLINEFORM1 as, DISPLAYFORM0
where INLINEFORM0 is the set of factors in the factor graph (as shown in Figure FIGREF4 ), INLINEFORM1 is one such factor, and INLINEFORM2 is the assignment to the subset of variables neighboring factor INLINEFORM3 . We define three types of potential functions: neural INLINEFORM4 , pairwise INLINEFORM5 , and transition INLINEFORM6 , described in detail below.
Neural Factors
The flexibility of our formulation allows us to include any form of custom-designed potentials in our model. Those for the neural factors have a fairly standard log-linear form, DISPLAYFORM0
except that the features INLINEFORM0 are themselves given by a neural network. There is one such factor per variable. We obtain our neural factors using a biLSTM over the input sequence INLINEFORM1 , where the input word embedding for each token is obtained from a character-level biLSTM embedder. This component of our model is similar to the model proposed by BIBREF5 . Given an input token INLINEFORM2 , we compute an input embedding INLINEFORM3 as, DISPLAYFORM0
Here, INLINEFORM0 is a character-level LSTM function that returns the last hidden state. This input embedding INLINEFORM1 is then used in the biLSTM tagger to compute an output representation INLINEFORM2 . Finally, the scores INLINEFORM3 are obtained as, DISPLAYFORM0
We use a language-specific linear layer with weights INLINEFORM0 and bias INLINEFORM1 .
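A compact PyTorch sketch of this neural-factor pathway, i.e. a character-level biLSTM whose final states form the word embedding, a word-level biLSTM, and a language-specific linear layer producing the unary scores, is given below. It is a simplified stand-in (single-layer LSTMs, illustrative sizes), not the authors' implementation.

```python
import torch
import torch.nn as nn

class NeuralFactorScorer(nn.Module):
    def __init__(self, n_chars, n_labels, char_dim=128, hidden=256):
        super().__init__()
        self.char_embed = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, hidden, bidirectional=True,
                                 batch_first=True)
        self.word_lstm = nn.LSTM(2 * hidden, hidden, bidirectional=True,
                                 batch_first=True)
        self.lang_linear = nn.ModuleDict()   # language-specific weights W_l, bias b_l
        self.n_labels, self.hidden = n_labels, hidden

    def add_language(self, lang):
        self.lang_linear[lang] = nn.Linear(2 * self.hidden, self.n_labels)

    def forward(self, char_ids, lang):
        # char_ids: (n_words, max_word_len) character indices for one sentence
        _, (h_n, _) = self.char_lstm(self.char_embed(char_ids))
        word_vecs = torch.cat([h_n[0], h_n[1]], dim=-1)    # final fwd/bwd states
        out, _ = self.word_lstm(word_vecs.unsqueeze(0))    # (1, n_words, 2*hidden)
        return self.lang_linear[lang](out.squeeze(0))      # unary scores per word

scorer = NeuralFactorScorer(n_chars=100, n_labels=64)
scorer.add_language("pt")
scores = scorer(torch.randint(0, 100, (7, 12)), lang="pt")  # 7 words, 12 chars each
```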
Pairwise Factors
As discussed previously, the pairwise factors are crucial for modeling correlations between tags. The pairwise factor potential for a tag INLINEFORM0 and tag INLINEFORM1 at timestep INLINEFORM2 is given in equation EQREF20 . Here, the dimension of INLINEFORM3 is INLINEFORM4 . These scores are used to define the neural factors as, DISPLAYFORM0
Transition Factors
Previous work has experimented with the use of a linear chain CRF with factors from a neural network BIBREF8 for sequence tagging tasks. We hypothesize that modeling transition factors in a similar manner can allow the model to utilize information about neighboring tags and capture word order features of the language. The transition factor for tag INLINEFORM0 and timestep INLINEFORM1 is given below for variables INLINEFORM2 and INLINEFORM3 . The dimension of INLINEFORM4 is INLINEFORM5 . DISPLAYFORM0
In our experiments, INLINEFORM0 and INLINEFORM1 are simple indicator features for the values of tag variables with no dependence on INLINEFORM2 .
Language-Specific Weights
As an enhancement to the information encoded in the transition and pairwise factors, we experiment with training general and language-specific parameters for the transition and the pairwise weights. We define the weight matrix INLINEFORM0 to learn the general trends that hold across both languages, and the weights INLINEFORM1 to learn the exceptions to these trends. In our model, we sum both these parameter matrices before calculating the transition and pairwise factors. For instance, the transition weights INLINEFORM2 are calculated as INLINEFORM3 .
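In code this is just an elementwise sum of two parameter matrices; a tiny sketch follows, where the label count is illustrative and the zero initialization mirrors what the training section reports for transition and pairwise parameters.

```python
import torch
import torch.nn as nn

num_pos = 17   # number of labels for one tag type (illustrative)

# Shared ("general") and per-language ("exception") transition parameters.
W_gen = nn.Parameter(torch.zeros(num_pos, num_pos))
W_lang = nn.Parameter(torch.zeros(num_pos, num_pos))

W_trans = W_gen + W_lang   # weights actually used when scoring transition factors
```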
Loopy Belief Propagation
Since the graph from Figure FIGREF4 is a loopy graph, performing exact inference can be expensive. Hence, we use loopy belief propagation BIBREF9 , BIBREF10 for computation of approximate variable and factor marginals. Loopy BP is an iterative message passing algorithm that sends messages between variables and factors in a factor graph. The message updates from variable INLINEFORM0 , with neighboring factors INLINEFORM1 , to factor INLINEFORM2 is DISPLAYFORM0
The message from factor INLINEFORM0 to variable INLINEFORM1 is DISPLAYFORM0
where INLINEFORM0 denotes an assignment to the subset of variables adjacent to factor INLINEFORM1, and INLINEFORM2 is the assignment for variable INLINEFORM3. Message updates are performed asynchronously in our model. Our message passing schedule was similar to that of forward-backward: the forward pass sends all messages from the first time step in the direction of the last. Messages to/from pairwise factors are included in this forward pass. The backward pass sends messages in the direction from the last time step back to the first. This process is repeated until convergence. We say that BP has converged when the maximum residual error BIBREF11 over all messages is below some threshold. Upon convergence, we obtain the belief values of variables and factors as, DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are normalization constants ensuring that the beliefs for a variable INLINEFORM2 and factor INLINEFORM3 sum-to-one. In this way, we can use the beliefs as approximate marginal probabilities.
Learning and Decoding
We perform end-to-end training of the neural factor graph by following the (approximate) gradient of the log-likelihood INLINEFORM0 . The true gradient requires access to the marginal probabilities for each factor, e.g. INLINEFORM1 where INLINEFORM2 denotes the subset of variables in factor INLINEFORM3 . For example, if INLINEFORM4 is a transition factor for tag INLINEFORM5 at timestep INLINEFORM6 , then INLINEFORM7 would be INLINEFORM8 and INLINEFORM9 . Following BIBREF7 , we replace these marginals with the beliefs INLINEFORM10 from loopy belief propagation. Consider the log-likelihood of a single example INLINEFORM11 . The partial derivative with respect to parameter INLINEFORM12 for each type of factor INLINEFORM13 is the difference of the observed features with the expected features under the model's (approximate) distribution as represented by the beliefs: INLINEFORM14
where INLINEFORM0 denotes all the factors of type INLINEFORM1 , and we have omitted any dependence on INLINEFORM2 and INLINEFORM3 for brevity— INLINEFORM4 is accessible through the factor index INLINEFORM5 . For the neural network factors, the features are given by a biLSTM. We backpropagate through to the biLSTM parameters using the partial derivative below, INLINEFORM6
where INLINEFORM0 is the variable belief corresponding to variable INLINEFORM1 .
To predict a sequence of tag sets INLINEFORM0 at test time, we use minimum Bayes risk (MBR) decoding BIBREF13 , BIBREF14 for Hamming loss over tags. For a variable INLINEFORM1 representing tag INLINEFORM2 at timestep INLINEFORM3 , we take DISPLAYFORM0
where INLINEFORM0 ranges over the possible labels for tag INLINEFORM1 .
Dataset
We used the Universal Dependencies Treebank UD v2.1 BIBREF0 for our experiments. We picked four low-resource/high-resource language pairs, each from a different family: Danish/Swedish (da/sv), Russian/Bulgarian (ru/bg), Finnish/Hungarian (fi/hu), Spanish/Portuguese (es/pt). Picking languages from different families would ensure that we obtain results that are on average consistent across languages.
The sizes of the training and evaluation sets are specified in Table TABREF31 . In order to simulate low-resource settings, we follow the experimental procedure from BIBREF5 . We restrict the number of sentences of the target language ( INLINEFORM0 ) in the training set to 100 or 1000 sentences. We also augment the tag sets in our training data by adding a INLINEFORM1 label for all tags that are not seen for a token. It is expected that our model will learn which tags are unlikely to occur given the variable dependencies in the factor graph. The dev set and test set are only in the target language. From Table TABREF32 , we can see there is also considerable variance in the number of unique tags and tag sets found in each of these language pairs.
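The low-resource simulation and the NULL augmentation described above amount to a few lines of preprocessing; the sketch below uses an illustrative dict-of-tags sentence representation.

```python
import random

def make_low_resource(target_sents, n_target=100, seed=0):
    """Restrict the target (low-resource) language to n_target training sentences."""
    sents = list(target_sents)
    return random.Random(seed).sample(sents, min(n_target, len(sents)))

def add_null_tags(sentence, all_tag_types):
    """Augment each token's tag set with a NULL label for unseen tag types."""
    return [{t: token_tags.get(t, "NULL") for t in all_tag_types}
            for token_tags in sentence]

sent = [{"POS": "Det"}, {"POS": "Noun", "Gender": "Masc", "Number": "Sing"}]
print(add_null_tags(sent, ["POS", "Gender", "Number", "Tense"]))
```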
Baseline Tagger
As the baseline tagger model, we re-implement the specific model from BIBREF5 that uses a language-specific softmax layer. Their model architecture uses a character biLSTM embedder to obtain a vector representation for each token, which is used as input in a word-level biLSTM. The output space of their model is all the tag sets seen in the training data. This work achieves strong performance on several languages from UD on the task of morphological tagging and is a strong baseline.
Training Regimen
We followed the parameter settings from BIBREF5 for the baseline tagger and the neural component of the FCRF-LSTM model. For both models, we set the input embedding and linear layer dimension to 128. We used 2 hidden layers for the LSTM where the hidden layer dimension was set to 256 and a dropout BIBREF15 of 0.2 was enforced during training. All our models were implemented in the PyTorch toolkit BIBREF16 . The parameters of the character biLSTM and the word biLSTM were initialized randomly. We trained the baseline models and the neural factor graph model with SGD and Adam respectively for 10 epochs each, in batches of 64 sentences. These optimizers gave the best performances for the respective models.
For the FCRF, we initialized transition and pairwise parameters with zero weights, which was important to ensure stable training. We considered BP to have reached convergence when the maximum residual error was below 0.05 or if the maximum number of iterations was reached (set to 40 in our experiments). We found that in cross-lingual experiments, when INLINEFORM0 , the relatively large amount of data in the HRL was causing our model to overfit on the HRL and not generalize well to the LRL. As a solution to this, we upsampled the LRL data by a factor of 10 when INLINEFORM1 for both the baseline and the proposed model.
Previous work on morphological analysis BIBREF5, BIBREF17 has reported scores on average token-level accuracy and F1 measure. The average token-level accuracy counts a tag set prediction as correct only if it is an exact match with the gold tag set. On the other hand, F1 measure is computed on a tag-by-tag basis, which allows it to give partial credit to partially correct tag sets. Based on the characteristics of each evaluation measure, Accuracy will favor tag-set prediction models (like the baseline), and F1 measure will favor tag-wise prediction models (like our proposed method). Given the nature of the task, it seems reasonable to prefer getting some of the tags correct (e.g. Noun+Masc+Sing becomes Noun+Fem+Sing), instead of missing all of them (e.g. Noun+Masc+Sing becomes Adj+Fem+Plur). F-score gives partial credit for getting some of the tags correct, while tagset-level accuracy will treat these two mistakes equally. Based on this, we believe that F-score is intuitively a better metric. However, we report both scores for completeness.
Main Results
First, we report the results in the case of monolingual training in Table TABREF33. The first row for each language pair reports the results for our reimplementation of cotterell2017crossling, and the second for our full model. From these results, we can see that we obtain improvements on the F-measure over the baseline method in most experimental settings, except BG with INLINEFORM0. In a few cases, the baseline model obtains higher accuracy scores, for the reason described in UID38.
In our cross-lingual experiments shown in Table TABREF37 , we also note F-measure improvements over the baseline model with the exception of DA/SV when INLINEFORM0 . We observe that the improvements are on average stronger when INLINEFORM1 . This suggests that our model performs well with very little data due to its flexibility to generate any tag set, including those not observed in the training data. The strongest improvements are observed for FI/HU. This is likely because the number of unique tags is the highest in this language pair and our method scales well with the number of tags due to its ability to make use of correlations between the tags in different tag sets.
To examine the utility of our transition and pairwise factors, we also report results on ablation experiments by removing transition and pairwise factors completely from the model in Table TABREF40 . Ablation experiments for each factor showed decreases in scores relative to the model where both factors are present, but the decrease attributed to the pairwise factors is larger, in both the monolingual and cross-lingual cases. Removing both factors from our proposed model results in a further decrease in the scores. These differences were found to be more significant in the case when INLINEFORM0 .
Upon looking at the tag set predictions made by our model, we found instances where our model utilizes variable dependencies to predict correct labels. For instance, for a specific phrase in Portuguese (um estado), the baseline model predicted {POS: Det, Gender: Masc, Number: Sing} INLINEFORM0 , {POS: Noun, Gender: Fem (X), Number: Sing} INLINEFORM1 , whereas our model was able to get the gender correct because of the transition factors in our model.
What is the Model Learning?
One of the major advantages of our model is the ability to interpret what the model has learned by looking at the trained parameter weights. We investigated both language-generic and language-specific patterns learned by our parameters:
Language-Generic: We found evidence for several syntactic properties learned by the model parameters. For instance, in Figure FIGREF42, we visualize the generic (INLINEFORM0) transition weights of the POS tags in Ru/Bg. Several universal trends, such as determiners and adjectives being followed by nouns, can be seen. In Figure FIGREF43, we also observed that the infinitive has a strong correlation with NULL tense, consistent with the universal phenomenon that infinitives don't have tense.
Language-Specific Trends: We visualized the learnt language-specific weights and looked for evidence of patterns corresponding to linguistic phenomena observed in a language of interest. For instance, in Russian, verbs are gender-specific in the past tense but not in other tenses. To analyze this, we plotted the pairwise weights for Gender/Tense in Figure FIGREF45 and verified strong correlations between the past tense and all gender labels.
Related Work
There exist several variations of the task of prediction of morphological information from annotated data: paradigm completion BIBREF18 , BIBREF19 , morphological reinflection BIBREF20 , segmentation BIBREF21 , BIBREF22 and tagging. Work on morphological tagging has broadly focused on structured prediction models such as CRFs, and neural network models. Amongst structured prediction approaches, BIBREF23 proposed a factor-graph based model that performed joint morphological tagging and parsing. BIBREF24 , BIBREF25 proposed the use of a higher-order CRF that is approximated using coarse-to-fine decoding. BIBREF26 proposed joint lemmatization and tagging using this framework. BIBREF27 was the first work that performed experiments on multilingual morphological tagging. They proposed an exponential model and the use of a morphological dictionary. BIBREF17 , BIBREF28 proposed a model that used tag projection of type and token constraints from a resource-rich language to a low-resource language for tagging.
Most recent work has focused on character-based neural models BIBREF29 , that can handle rare words and are hence more useful to model morphology than word-based models. These models first obtain a character-level representation of a token from a biLSTM or CNN, which is provided to a word-level biLSTM tagger. BIBREF29 , BIBREF30 compared several neural architectures to obtain these character-based representations and found the effect of the neural network architecture to be minimal given the networks are carefully tuned. Cross-lingual transfer learning has previously boosted performance on tasks such as translation BIBREF31 and POS tagging BIBREF32 , BIBREF33 . BIBREF5 proposed a cross-lingual character-level neural morphological tagger. They experimented with different strategies to facilitate cross-lingual training: a language ID for each token, a language-specific softmax and a joint language identification and tagging model. We have used this work as a baseline model for comparing with our proposed method.
In contrast to earlier work on morphological tagging, we use a hybrid of neural and graphical model approaches. This combination has several advantages: we can make use of expressive feature representations from neural models while ensuring that our model is interpretable. Our work is similar in spirit to BIBREF8 and BIBREF34 , who proposed models that use a CRF with features from neural models. For our graphical model component, we used a factorial CRF BIBREF7 , which is a generalization of a linear chain CRF with additional pairwise factors between cotemporal variables.
Conclusion and Future Work
In this work, we proposed a novel framework for sequence tagging that combines neural networks and graphical models, and showed its effectiveness on the task of morphological tagging. We believe this framework can be extended to other sequence labeling tasks in NLP such as semantic role labeling. Due to the robustness of the model across languages, we believe it can also be scaled to perform morphological tagging for multiple languages together.
Acknowledgments
The authors would like to thank David Mortensen, Soumya Wadhwa and Maria Ryskina for useful comments about this work. We would also like to thank the reviewers who gave valuable feedback to improve the paper. This project was supported in part by an Amazon Academic Research Award and Google Faculty Award. | Danish/Swedish (da/sv), Russian/Bulgarian (ru/bg), Finnish/Hungarian (fi/hu), Spanish/Portuguese (es/pt) |
96ee62407b1ca2a6538c218781e73e8fbf45094a | 96ee62407b1ca2a6538c218781e73e8fbf45094a_0 | Q: How many human subjects were used in the study?
Text: Introduction
Recent years have seen a rapid increase of robotic deployment, beyond traditional applications in cordoned-off workcells in factories, into new, more collaborative use-cases. For example, social robotics and service robotics have targeted scenarios like rehabilitation, where a robot operates in close proximity to a human. While industrial applications envision full autonomy, these collaborative scenarios involve interaction between robots and humans and require effective communication. For instance, a robot that is not able to reach an object may ask for a pick-and-place to be executed in the context of collaborative assembly. Or, in the context of a robotic assistant, a robot may ask for confirmation of a pick-and-place requested by a person.
When the robot's form permits, researchers can design such interactions using principles informed by research on embodied face-to-face human–human communication. In particular, by realizing pointing gestures, an articulated robotic arm with a directional end-effector can exploit a fundamental ingredient of human communication BIBREF0. This has motivated roboticists to study simple pointing gestures that identify objects BIBREF1, BIBREF2, BIBREF3. This paper develops an empirically-grounded approach to robotic pointing that extends the range of physical settings, task contexts and communicative goals of robotic gestures. This is a step towards the richer and diverse interpretations that human pointing exhibits BIBREF4.
This work has two key contributions. First, we create a systematic dataset, involving over 7000 human judgments, where crowd workers describe their interpretation of animations of simulated robots instructing pick-and-place tasks. Planned comparisons allow us to compare pointing actions that identify objects (referential pointing) with those that identify locations (locating pointing). They also allow us to quantify the effect of accompanying speech, task constraints and scene complexity, as well as variation in the spatial content of the scene. This new resource documents important differences in the way pointing is interpreted in different cases. For example, referential pointing is typically robust to the exactness of the pointing gesture, whereas locating pointing is much more sensitive and requires more deliberate pointing to ensure a correct interpretation. The Experiment Design section explains the overall process of data collection, the power analysis for the preregistered protocol, and the content presented to subjects across conditions.
The second contribution is a set of interpretive principles, inspired by the literature on vague communication, that summarize the findings about robot pointing. They suggest that pointing selects from a set of candidate interpretations determined by the type of information specified, the possibilities presented by the scene, and the options compatible with the current task. In particular, we propose that pointing picks out all candidates that are not significantly further from the pointing ray than the closest alternatives. Based on our empirical results, we present design principles that formalize the relevant notions of “available alternatives” and “significantly further away”, which can be used in future pointing robots. The Analysis and Design Principles sections explain and justify this approach.
Related work
This paper focuses on the fundamental AI challenge of effective embodied communication, by proposing empirically determined generative rules for robotic pointing, including not only referential pointing but also pointing that is location-oriented in nature. Prior research has recognized the importance of effective communication by embracing the diverse modalities that AI agents can use to express information. In particular, perceiving physical actions BIBREF5 is often essential for socially-embedded behavior BIBREF6, as well as for understanding human demonstrations and inferring solutions that can be emulated by robots BIBREF7. Animated agents have long provided resources for AI researchers to experiment with models of conversational interaction including gesture BIBREF8, while communication using hand gestures BIBREF9 has played a role in supporting intelligent human-computer interaction.
Enabling robots to understand and generate instructions to collaboratively carry out tasks with humans is an active area of research in natural language processing and human-robot interaction BIBREF10, BIBREF11. Since robotic hardware capabilities have increased, robots are increasingly seen as a viable platform for expressing and studying behavioral models BIBREF12. In the context of human-robot interaction, deictic or pointing gestures have been used as a form of communication BIBREF13. More recent work has developed richer abilities for referring to objects by using pre-recorded, human-guided motions BIBREF14, or using mixed-reality, multi-modal setups BIBREF15.
Particular efforts in robotics have looked at making pointing gestures legible, adapting the process of motion planning so that robot movements are correctly understood as being directed toward the location of a particular object in space BIBREF2, BIBREF3. The current work uses gestures, including pointing gestures and demonstrations, that are legible in this sense. It goes on to explore how precise the targeting has to be to signal an intended interpretation.
In natural language processing research, it's common to use an expanded pointing cone to describe the possible target objects for a pointing gesture, based on findings about human pointing BIBREF16, BIBREF17. Pointing cone models have also been used to model referential pointing in human–robot interaction BIBREF18, BIBREF19. In cluttered scenes, the pointing cone typically includes a region with many candidate referents. Understanding and generating object references in these situations involves combining pointing with natural language descriptions BIBREF1, BIBREF20. While we also find that many pointing gestures are ambiguous and can benefit from linguistic supplementation, our results challenge the assumption of a uniform pointing cone. We argue for an alternative, context-sensitive model.
In addition to gestures that identify objects, we also look at pointing gestures that identify points in space. The closest related work involves navigation tasks, where pointing can be used to discriminate direction (e.g., left vs right) BIBREF21, BIBREF22. The spatial information needed for pick-and-place tasks is substantially more precise. Our findings suggest that this precision significantly impacts how pointing is interpreted and how it should be modeled.
Communicating Pick-and-Place
This section provides a formalization of pick-and-place tasks and identifies information required to specify them.
Manipulator: Robots that can physically interact with their surroundings are called manipulators, of which robotic arms are the prime example.
Workspace: The manipulator operates in a 3D workspace $\mathcal {W} \subseteq \mathbb {R}^3$. The workspace also contains a stable surface of interest defined by a plane $S\subset \mathcal {W}$ along with various objects. To represent 3D coordinates of workspace positions, we use $x\in \mathcal {W}$.
End-effector: The tool-tips or end-effectors are geometries, often attached at the end of a robotic arm, that can interact with objects in the environment. These form a manipulator's chief mode of picking and placing objects of interest and range from articulated fingers to suction cups. A subset of the workspace that the robot can reach with its end-effector is called the reachable workspace. The end-effector in this work is used as a pointing indicator.
Pick-and-place: Given a target object in the workspace, a pick-and-place task requires the object to be picked up from its initial position and orientation, and placed at a final position and orientation. When a manipulator executes this task in its reachable workspace, it uses its end-effector. The rest of this work ignores the effect of the object's orientation by considering objects with sufficient symmetry. Given this simplification, the pick-and-place task can be viewed as a transition from an initial position $x_{\textit {init}}\in \mathcal {W}$ to a final placement position $x_{\textit {final}}\in \mathcal {W}$. Thus, a pick-and-place task can be specified with a tuple
Pointing Action: Within its reachable workspace the end-effector of the manipulator can attain different orientations to fully specify a reachable pose $p$, which describes its position and orientation. The robots we study have a directional tooltip that viewers naturally see as projecting a ray $r$ along its axis outward into the scene. In understanding pointing as communication, the key question is the relationship between the ray $r$ and the spatial values $x_{\textit {init}}$ and $x_{\textit {final}}$ that define the pick-and-place task.
To make this concrete, we distinguish between the target of pointing and the intent of pointing. Given the ray $r$ coming out of the end-effector geometry, we define the target of the pointing as the intersection of this ray with the stable surface $S$.
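Computing that target is a standard ray-plane intersection. Below is a small numpy sketch with illustrative geometry (a tooltip above a horizontal table); it is not tied to either robot's actual kinematics.

```python
import numpy as np

def pointing_target(ray_origin, ray_dir, plane_point, plane_normal):
    """Intersect the end-effector ray with the plane S of the rest surface.
    Returns None if the ray is (near-)parallel to the plane or points away from it."""
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-9:
        return None
    t = np.dot(plane_normal, plane_point - ray_origin) / denom
    if t < 0:                      # intersection lies behind the tooltip
        return None
    return ray_origin + t * ray_dir

# e.g. a tooltip 0.6 m above a horizontal table (z = 0), pointing down and forward
x_target = pointing_target(np.array([0.0, 0.0, 0.6]),
                           np.array([0.5, 0.0, -1.0]),
                           plane_point=np.array([0.0, 0.0, 0.0]),
                           plane_normal=np.array([0.0, 0.0, 1.0]))
print(x_target)   # approximately [0.3, 0.0, 0.0]
```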
Meanwhile, the intent of pointing specifies one component of a pick-and-place task. There are two cases:
Referential Pointing: The pointing action is intended to identify a target object $o$ to be picked up. This object is the referent of such an action. We can find $x_{\textit {init}}$, based on the present position of $o$.
Locating Pointing: The pointing action is intended to identify the location in the workspace where the object needs to be placed, i.e, $x_{\textit {final}}$.
We study effective ways to express intent for a pick-and-place task. In other words, what is the relationship between a pointing ray $r$ and the location $x_{\textit {init}}$ or $x_{\textit {final}}$ that it is intended to identify? To assess these relationships, we ask human observers to view animations expressing pick-and-place tasks and classify their interpretations. To understand the factors involved, we investigate a range of experimental conditions.
Experiments
Our experiments share a common animation platform, described in the Experimental Setup, and a common Data Collection protocol. The experiments differ in presenting subjects with a range of experimental conditions, as described in the corresponding section. All of the experiments described here together with the methods chosen to analyze the data were based on a private but approved pre-registration on aspredicted.org. The document is publicly available at: https://aspredicted.org/cg753.pdf.
Experiments ::: Experiment Setup
Each animation shows a simulated robot producing two pointing gestures to specify a pick-and-place task. Following the animation, viewers are asked whether a specific image represents a possible result of the specified task.
Robotic Platforms: The experiments were performed on two different robotic geometries, based on a Rethink Baxter and a Kuka IIWA14. The Baxter is a dual-arm manipulator with two arms mounted on either side of a static torso. The experiments only move the right arm of the Baxter. The Kuka consists of a single arm that is vertically mounted, i.e., points upward at the base.
Note: The real Baxter robot possesses a heads-up display that can be likened to a `head'. This has been removed in the simulations that were used in this study (as shown for example in Figure FIGREF7).
Workspace Setup: Objects are placed in front of the manipulators. In certain trials, a table is placed in front of the robot as well, and the objects rest in stable configurations on top of the table. A pick-and-place task is specified in terms of the positions of one of the objects.
Objects: The objects used in the study include small household items like mugs, saucers, and boxes (cuboids), all placed in front of the robots.
Motion Generation: The end-effector of the manipulator is instructed to move to pre-specified waypoints, designed for the possibility of effective communication, that typically lie between the base of the manipulator and the object itself. Such waypoints fully specify both the position and orientation of the end-effector to satisfy pointing actions. The motions are performed by solving Inverse Kinematics for the end-effector geometry and moving the manipulator along these waypoints using a robotic motion planning library BIBREF23. The motions were replayed on the model of the robot, and rendered in Blender.
Pointing Action Generation: Potential pointing targets are placed using a cone $C(r, \theta )$, where $r$ represents the pointing ray and $\theta $ represents the vertex angle of the cone. As illustrated in Figure FIGREF2, the cone allows us to assess the possible divergence between the pointing ray and the actual location of potential target objects on the rest surface $S$.
Given a pointing ray $r$, we assess the resolution of the pointing gesture by sampling $N$ object poses $p_i, i=1:N$ in $P=C(r, \theta ) \cap S$—the intersection of the pointing cone with the rest surface. While $p_i$ is the 6d pose for the object with translation $t \in R^3$ and orientation $R \in SO(3)$ only 2 degrees-of-freedom $(x, y)$ corresponding to $t$ are varied in the experiments. By fixing the $z$ coordinate for translation and restricting the z-axis of rotation to be perpendicular to $S$, it is ensured that the object rests in a physically stable configuration on the table.
The $N$ object poses are sampled by fitting an ellipse within $P$ and dividing the ellipse into 4 quadrants $q_1\ldots q_4$ (see Figure FIGREF2 (C)). Within each quadrant $q_i$, $N/4$ $(x,y)$ positions are sampled uniformly at random. For certain experiments, additional samples are generated to increase coverage within the ellipse, using a dispersion measure.
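A sketch of this sampling scheme, simplified to an axis-aligned ellipse standing in for the cone-plane intersection region $P$ (all sizes and names are illustrative):

```python
import numpy as np

def sample_object_positions(center, a, b, n=8, rng=None):
    """Sample n (x, y) object positions inside the ellipse approximating the
    intersection of C(r, theta) with S, with n/4 samples per quadrant."""
    rng = rng if rng is not None else np.random.default_rng(0)
    samples = []
    quadrant_signs = [(+1, +1), (-1, +1), (-1, -1), (+1, -1)]
    per_quad = n // 4
    for sx, sy in quadrant_signs:
        count = 0
        while count < per_quad:
            # rejection sampling inside one quadrant of the ellipse
            dx, dy = rng.uniform(0, a), rng.uniform(0, b)
            if (dx / a) ** 2 + (dy / b) ** 2 <= 1.0:
                samples.append((center[0] + sx * dx, center[1] + sy * dy))
                count += 1
    return samples

# e.g. a 0.20 m x 0.12 m ellipse around the pointing target
positions = sample_object_positions(center=(0.3, 0.0), a=0.20, b=0.12, n=8)
print(positions)
```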
Speech: Some experiments also included verbal cues, with phrases like `Put that there' accompanying the pointing actions. It was important for the pointing actions and these verbal cues to be synchronized. To achieve this, we generate the voice using Amazon Polly with text written in SSML format and ensure that the peak of each gesture (the moment a gesture comes to a stop) is aligned with the peak of the corresponding audio phrase. During the generation of the video itself, we took note of the peak moments of the gestures and then manipulated the durations between audio peaks using SSML to match the gesture peaks, after analyzing the audio with the open-source tool PRAAT (www.praat.org).
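At a high level, the alignment amounts to inserting SSML breaks so that the end of each spoken chunk lands on the corresponding gesture peak. The snippet below only builds such an SSML string from hypothetical peak times and chunk durations; it does not show the Amazon Polly request itself, and the alignment logic is our own simplification.

```python
def build_put_that_there_ssml(gesture_peaks_s, chunk_durations_s,
                              chunks=("Put that", "there.")):
    """Build SSML so the end of each spoken chunk lands on a gesture peak.
    gesture_peaks_s: times (s) at which each pointing gesture comes to a stop.
    chunk_durations_s: estimated spoken duration (s) of each chunk, e.g. measured
    with PRAAT on a first synthesis pass. All numbers here are hypothetical."""
    parts, clock = ["<speak>"], 0.0
    for chunk, peak, dur in zip(chunks, gesture_peaks_s, chunk_durations_s):
        pause_ms = max(0, int(1000 * (peak - clock - dur)))
        if pause_ms:
            parts.append(f'<break time="{pause_ms}ms"/>')
        parts.append(chunk)
        clock = peak
    parts.append("</speak>")
    return " ".join(parts)

# e.g. referential gesture peak at 1.2 s, locating gesture peak at 2.6 s
print(build_put_that_there_ssml([1.2, 2.6], [0.6, 0.4]))
```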
Experiments ::: Data Collection
Data collection was performed on Amazon Mechanical Turk. All subjects agreed to a consent form and were compensated at an estimated rate of USD 20 an hour. The subject pool was restricted to non-colorblind US citizens. Subjects are presented with a rendered video of the simulation in which the robot performs one referential pointing action and one locating pointing action, which amounts to pointing to an object and then to a final location. During these executions, synchronized speech is included in some of the trials to provide verbal cues.
Then on the same page, subjects see the image that shows the result of the pointing action. They are asked whether the result is (a) correct, (b) incorrect, or (c) ambiguous.
To test our hypothesis, we studied the interpretation of the two pointing behaviors in different contexts. Assuming our conjecture and a significance level of 0.05, a sample of 28 people in each condition is enough to detect our effect with 95% power. Participants are asked to report judgments on the interpretation of the pointing action in each class. Each participant undertakes two trials from each class. The range of different cases is described below. Overall, the data collection in this study involved over 7,290 responses to robot pointing actions.
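As a back-of-the-envelope check on the preregistered sample size, a standard two-proportion power calculation can be written directly. The assumed proportions (0.5 vs 0.9) are purely hypothetical and are not taken from the paper; they simply illustrate an effect size that yields a figure close to 28 per condition.

```python
import math
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.95):
    """Per-condition sample size for a two-sided two-proportion z-test."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return ((z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))) / (p1 - p2) ** 2

# Hypothetical effect: 50% vs 90% correct responses between two conditions.
print(math.ceil(n_per_group(0.5, 0.9)))   # -> 28
```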
Experiments ::: Experimental Conditions
We used our experiment setup to generate videos and images from the simulation for a range of different conditions.
Experiments ::: Experimental Conditions ::: Referential vs Locating
In this condition, to reduce the chances of possible ambiguities, only one mug is placed on the table. The Baxter robot points its right arm to the mug and then points to its final position, accompanied by a synchronized verbal cue, “Put that there.”
We keep the motion identical across all the trials in this method. We introduce a variability in the initial position of the mug by sampling 8 random positions within conic sections subtending $45^{\circ } , 67.5^{\circ }, $ and $90^{\circ }$ on the surface of the table. New videos are generated for each such position of the mug. This way we can measure how flexible subjects are to the variation of the initial location of the referent object.
To test the effect for the locating pointing action, we test similarly sampled positions around the final pointed location, and display these realizations of the mug as the result images to subjects, while the initial position of the mug is kept perfectly situated.
A red cube that is in the gesture space of the robot, and is about twice as big as the mug is placed on the other side of the table as a visual guide for the subjects to see how objects can be placed on the table. We remove the tablet that is attached to Baxter's head for our experiments.
Effect of speech: In order to test the effect of speech on the disparity between the kinds of pointing actions, a set of experiments was designed under the Referential vs Locating method with and without any speech. All subsequent methods will include verbal cues during their action execution. These cues are audible in the video.
Experiments ::: Experimental Conditions ::: Reverse Task
One set of experiments is run for the pick-and-place task with the initial and final positions of the object flipped in this reverse task. As opposed to the first set of experiments, the robot now begins by pointing to an object in the middle of the table, and then to an area towards the table's edge, i.e., the pick and place positions of the object are `reversed'.
The trials are meant to measure the sensitivity of the subjects in pick trials to the direction of the pointing gestures and to the absolute locations that the subjects thought the robot was pointing at.
This condition is designed to be identical to the basic Referential vs Locating study, except for the direction of the action. The motions are still executed on the Baxter's right arm.
Experiments ::: Experimental Conditions ::: Different Robotic Arm
In order to ensure that the results obtained in this study are not dependent on the choice of the robotic platform or its visual appearance, a second robot—a singly armed industrial Kuka manipulator—is also evaluated in a Referential vs Locating study (shown in Figure FIGREF6).
Experiments ::: Experimental Conditions ::: Cluttered Scene
To study how the presence of other objects would change the behavior of referential pointing, we examine the interpretation of the pointing actions when there is more than one mug on the table. Given the instructions to the subjects, both objects are candidate targets. This experiment allows the investigation of the effect of a distractor object in the scene on referential pointing.
We start with a setup where there are two mugs placed on the table (similar to the setup in Figure FIGREF14): a target mug placed at position $x_{\textit {object}}$ and a distractor mug at position $x_{\textit {distractor}}$, with the robot performing an initial pointing action to a position $x_{\textit {init}}$ on the table. Both objects are sampled around $x_{\textit {init}}$ along the diametric line of the conic section arising from increasing cone angles of $45^\circ$, $67.5^\circ$, and $90^\circ$, where the separation of $x_{\textit {object}}$ and $x_{\textit {distractor}}$ is equal to the length of the diameter of the conic section, $D$. The objects are then positioned on the diametric line with a random offset between $[-\frac{D}{2}, \frac{D}{2}]$ around $x_{\textit {init}}$ and along the line. This means that the objects are at various distances apart and, depending upon the offset, one of the objects is nearer to the pointing action. The setup ensures that the nearer mug serves as the target object, and the farther one serves as the distractor. The motions are performed on Baxter's right arm. The camera perspective in simulation is set to face into the pointing direction. The subjects in this trial are shown images of the instant of the referential pointing action.
Experiments ::: Experimental Conditions ::: Natural vs Unnatural scene
In this condition we study how the contextual and physical understanding of the world impacts the interpretation of pointing gestures. We generate a scenario for locating pointing in which the right arm of the Baxter points to a final placement position for the cuboidal object on top of a stack of cuboidal objects but towards the edge which makes it physically unstable. The final configurations of the object (Figure FIGREF17) shown to the users were a) object lying on top of the stack b) object in the unstable configuration towards the edge of the stack and c) object at the bottom of the stack towards one side. New videos are generated for each scenario along with verbal cues.
The pointing action, as well as the objects of interest, stays identical between the natural and unnatural trials. The difference lies in other objects in the scene that could defy gravity and float in the unnatural trials. The subjects were given a text-based instruction at the beginning of an unnatural trial saying they were seeing a scene where “gravity does not exist.”
Experiments ::: Experimental Conditions ::: Different verbs
To test if the effect is specific to the verb put, we designed a control condition where everything remained the same as the Referential vs Locating trials except the verb put which we replaced with place, move and push. Here again we collect 30 data points for each sampled $x^*$.
Analysis ::: Referential vs Locating
We study how varying the target of the pointing action from a referent object to a part of the space changes the interpretation of the pointing action by comparing the interpretation of the position of the pointing action $x^*$ in each condition.
Figure FIGREF19 shows the results of the experiment. The plot shows the spread of correct, incorrect, and ambiguous responses over the sampled positions around the location of referential vs locating pointing actions. The referential data demonstrates the robustness of the interpretation. Responses were overwhelmingly correct, for both robots, when interpreting a referent object in the pick part of a pick-and-place task. Locating pointing shows a much higher sensitivity to the accuracy of $x^*$ with respect to the true final placement. This shows up as a larger incidence of incorrect and ambiguous responses from the human subjects. This trend holds for the reverse trial as well.
While the study attempts to separate out and measure the critical aspects of the interpretation of robotic pointing actions, some ambiguities are unavoidable, such as those arising from the camera perspective being projected onto a simulated 2D video or image. We suspect that the observed stretch of correct responses in spatial trials is due to perspective.
To test our hypothesis that Referential pointing is interpreted less precisely than Locating pointing, we performed a Chi-squared test and compared the proportions of correct, incorrect and ambiguous responses in referential and spatial trials. The results of the test show that these two classes are statistically significantly different ($\chi ^2= 13.89, p = 0.00096$).
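Such a test is easy to reproduce with standard statistical tooling. The sketch below is illustrative only: the counts in the contingency table are hypothetical placeholders, not the responses collected in our trials.

```python
from scipy.stats import chi2_contingency

# Rows: referential vs locating trials; columns: correct, incorrect, ambiguous responses.
# These counts are hypothetical placeholders.
observed = [
    [120, 10, 20],
    [85, 35, 30],
]
chi2, p, dof, _ = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.5f}")
```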
To study whether we observe the same effects in the results of the reverse trial, the no-speech trial and the Kuka trial, we ran an equivalence test following the two one-sided tests method described in BIBREF24, where each test is a pooled $z$-test with no continuity correction at a significance level of 0.05. We found that changing the robot, removing the speech and reversing the direction of the pointing action made no difference to the interpretation of locating and referential pointing within any margin smaller than 5%.
Analysis ::: Natural vs Unnatural
As shown in Table TABREF21, we observed that in the natural scene, when the end-effector points towards the edge of the cube on top of the stack, subjects place the new cube on top of the stack or on the table instead of on the edge of the cube. However, in the unnatural scene, when we explain to subjects that there is no gravity, a majority agree with the final image that has the cube on the edge. To test if this difference is statistically significant, we use the Fisher exact test BIBREF25. The test statistic value is $0.0478$. The result is significant at $p < 0.05$.
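As a sketch of this analysis, the two-by-two table below uses hypothetical counts (placements on the edge vs. elsewhere, in the natural vs. no-gravity condition); it only illustrates how such a test can be run, not our actual data.

```python
from scipy.stats import fisher_exact

# Rows: natural scene, no-gravity scene; columns: cube placed on the edge, placed elsewhere.
# The counts are hypothetical placeholders.
table = [
    [3, 17],
    [12, 8],
]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.3f}, p = {p_value:.4f}")
```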
Analysis ::: Different verbs
The results of the Chi-squared test show that in spatial trials, when we replace put with place, push and move, the differences between the distributions of correct, incorrect and ambiguous responses are not statistically significant ($\chi ^2=0.2344 $, $p = 0.971$). The coefficients of the multinomial logistic regression model and the $p$-values also suggest that the differences in judgements with different verbs are not statistically significant ($b<0.0001$ , $p>0.98$).
Analysis ::: Cluttered
The data from these trials show how human subjects select between the two candidate target objects on the table. Since the instructions do not serve to disambiguate the target mug, the collected data show what the observers deemed as the correct target. Figure FIGREF24 visualizes subjects' responses across trials. The location of each pie uses the $x$-axis to show how much closer one candidate object is to the pointing target than the other, and uses the $y$-axis to show the overall imprecision of pointing. Each pie in Figure FIGREF24 shows the fraction of responses across trials that recorded the nearer (green) mug as correct compared to the farther mug (red). The white shaded fractions of the pies show the fraction of responses where subjects found the gesture ambiguous.
As we can see in Figure FIGREF24, once the two objects are roughly equidistant from the center of pointing (within about 10cm), subjects tend to regard the pointing gesture as ambiguous, but as the difference in distance increases, subjects are increasingly likely to prefer the closer target. In all cases, wherever subjects have a preference for one object over the other, they picked the mug that was the nearer target of the pointing action more often than the farther one.
Human Evaluation of Instructions
After designing and conducting our experiments, we became concerned that subjects might regard imprecise referential pointing as understandable but unnatural. If they did, their judgments might combine ordinary interpretive reasoning with additional effort, self-consciousness or repair. We therefore added a separate evaluation to assess how natural the generated pointing actions and instructions are. We recruited 480 subjects from Mechanical Turk using the same protocol described in our Data Collection procedure, and asked them to rate how natural they regarded the instruction on a scale of 0 to 5.
The examples were randomly sampled from the videos of the referential pointing trials that we showed to subjects for both the Baxter and Kuka robots. These examples were selected so that we obtained an equal number of samples from each cone. For Baxter, the average ratings for samples from the $45^\circ $, ${67.5}^\circ $ and $90^\circ $ cones are $3.625, 3.521$ and $3.650$ respectively. For Kuka, the average ratings for samples from the $45^\circ $, ${67.5}^\circ $ and $90^\circ $ cones are $3.450, 3.375$, and $3.400$. Overall, the average for Baxter is $3.600$, and for Kuka it is $3.408$. The differences between Kuka and Baxter and the differences across cones are not statistically significant ($t \le |1.07|, p > 0.1 $). Thus we have no evidence that subjects regard imprecise pointing as problematic.
Design Principles
The results of the experiments suggest that locating pointing is interpreted rather precisely, whereas referential pointing is interpreted relatively flexibly. This naturally aligns with the possibility of alternative interpretations. For spatial reference, any location is a potential target. By contrast, for referential pointing, it suffices to distinguish the target object from its distractors.
We can characterize this interpretive process in formal terms by drawing on observations from the philosophical and computational literature on vagueness BIBREF26, BIBREF27, BIBREF28. Any pointing gesture starts from a set of candidate interpretations $D \subseteq \mathcal {W}$ determined by the context and the communicative goal. In unconstrained situations, locating pointing allows a full set of candidates $D = \mathcal {W}.$ If factors like common-sense physics impose task constraints, that translates to restrictions on feasible targets $CS$, leading to a more restricted set of candidates $D = CS \cap \mathcal {W}$. Finally, for referential pointing, the potential targets are located at $x_1 \ldots x_N \in S$, and $D = \lbrace x_1 \ldots x_N \rbrace .$
Based on the communicative setting, we know that the pointing gesture, like any vague referring expression, must select at least one of the possible interpretations BIBREF28. We can find the best interpretation by its distance to the target $x^*$ of the pointing gesture. Using $d(x,x^*)$ to denote this distance, the best interpretation gives us a threshold $\theta = \min _{x \in D} d(x, x^*)$.
Vague descriptions can't be sensitive to fine distinctions BIBREF27. So if a referent at $\theta $ is close enough to the pointing target, then another at $\theta + \epsilon $ must be close enough as well, for any value of $\epsilon $ that is not significant in the conversational context. Our results suggest that viewers regard 10cm (in the scale of the model simulation) as an approximate threshold for a significant difference in our experiments.
In all, we predict that a pointing gesture is interpreted as referring to $\lbrace x \in D | d(x,x^*) \le \theta + \epsilon \rbrace .$ We explain the different interpretations through the different choice of $D$.
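The sketch below spells out this interpretation rule; it is a minimal illustration in which the candidate set, the Euclidean distance, and the slack value $\epsilon $ are placeholder choices rather than fitted quantities.

```python
from math import dist

def interpret_pointing(candidates, x_star, epsilon):
    """Candidates that are not significantly farther from x_star than the best one."""
    theta = min(dist(x, x_star) for x in candidates)
    return [x for x in candidates if dist(x, x_star) <= theta + epsilon]

# Referential pointing: D is the set of object positions (placeholder coordinates, in metres).
objects = [(0.30, 0.10), (0.42, 0.12), (0.90, 0.50)]
print(interpret_pointing(objects, x_star=(0.35, 0.10), epsilon=0.10))
# -> the first two objects remain, i.e. the gesture is ambiguous between them.

# Locating pointing: x_star itself is in D, so theta = 0 and only placements
# within epsilon of the pointing target are admitted.
```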
Design Principles ::: Locating Pointing
For unconstrained locating pointing, $x^* \in D$, so $\theta =0$. This means the intended placement cannot differ significantly from the pointing target. Taking common sense into account, we allow for a small divergence that connects the pointing, for example, to the closest stable placement.
Design Principles ::: Referential Pointing
For referential pointing, candidates play a much stronger role. A pointing gesture always has the closest object to the pointing target as a possible referent. However, ambiguities arise when the geometries of more than one object intersect with the $\theta +\epsilon $-neighborhood of $x^*$. We can think of that, intuitively, in terms of the effects of $\theta $ and $\epsilon $. Alternative referents give rise to ambiguity not only when they are too close to the target location ($\theta $) but even when they are simply not significantly further away from the target location ($\epsilon $).
Conclusion and Future Work
We have presented an empirical study of the interpretation of simulated robots instructing pick-and-place tasks. Our results show that robots can effectively combine pointing gestures and spoken instructions to communicate both object and spatial information. We offer an empirical characterization—the first, to the best of the authors' knowledge—of the use of robot gestures to communicate precise spatial locations for placement purposes. We have suggested that pointing, in line with other vague references, gives rise to a set of candidate interpretations that depend on the task, context and communicative goal. Users pick the interpretations that are not significantly further from the pointing ray than the best ones. This contrasts with previous models that required pointing gestures to target a referent exactly or fall within a context-independent pointing cone.
Our work has a number of limitations that suggest avenues for future work. It remains to implement the design principles on robot hardware, explore the algorithmic process for generating imprecise but interpretable gestures, and verify the interpretations of physically co-present viewers. Note that we used a 2D interface, which can introduce artifacts, for example from the effect of perspective. In addition, robots can in general trade off pointing gestures with other descriptive material in offering instructions. Future work is needed to assess how such trade-offs play out in location reference, not just in object reference.
More tight-knit collaborative scenarios need to be explored, including ones where multiple pick-and-place tasks can be composed to communicate more complex challenges and ones where they involve richer human environments. Our study of common sense settings opens up intriguing avenues for such research, since it suggests ways to take into account background knowledge and expectations to narrow down the domain of possible problem specifications in composite tasks like “setting up a dining table.”
While the current work studies the modalities of pointing and verbal cues, effects of including additional robotic communication in the form of heads-up displays or simulated eye-gaze would be other directions to explore. Such extensions would require lab experiments with human subjects and a real robot. This is the natural next step of our work.
Acknowledgments
The research presented here is supported by NSF Awards IIS-1526723, IIS-1734492, IIS-1723869 and CCF-1934924. Thanks to the anonymous reviewers for helpful comments. We would also like to thank the Mechanical Turk participants for their contributions. | Unanswerable |
ad0a7fe75db5553652cd25555c6980f497e08113 | ad0a7fe75db5553652cd25555c6980f497e08113_0 | Q: How does the model compute the likelihood of executing to the correction semantic denotation?
Text: Introduction
Semantic parsing is the task of converting natural language utterances into machine-understandable meaning representations or logical forms. The task has attracted much attention in the literature due to a wide range of applications, from question answering BIBREF0 , BIBREF1 to relation extraction BIBREF2 , goal-oriented dialog BIBREF3 , and instruction understanding BIBREF4 , BIBREF5 , BIBREF6 .
In a typical semantic parsing scenario, a logical form is executed against a knowledge base to produce an outcome (e.g., an answer) known as denotation. Conventional semantic parsers are trained on collections of utterances paired with annotated logical forms BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . However, the labeling of logical forms is labor-intensive and challenging to elicit at a large scale. As a result, alternative forms of supervision have been proposed to alleviate the annotation bottleneck faced by semantic parsing systems. One direction is to train a semantic parser in a weakly-supervised setting based on utterance-denotation pairs BIBREF11 , BIBREF12 , BIBREF2 , BIBREF13 , since such data are relatively easy to obtain via crowdsourcing BIBREF14 .
However, the unavailability of logical forms in the weakly-supervised setting renders model training more difficult. A fundamental challenge in learning semantic parsers from denotations is finding consistent logical forms, i.e., those which execute to the correct denotation. This search space can be very large, growing exponentially as compositionality increases. Moreover, consistent logical forms unavoidably introduce a certain degree of spuriousness — some of them will accidentally execute to the correct denotation without reflecting the meaning of the utterance. These spurious logical forms are misleading supervision signals for the semantic parser.
In this work we introduce a weakly-supervised neural semantic parsing system which aims to handle both challenges. Our system, shown in Figure 1 , mainly consists of a sequence-to-tree parser which generates candidate logical forms for a given utterance. These logical forms are subsequently ranked by two components: a log-linear model scores the likelihood of each logical form executing to the correct denotation, and an inverse neural parser measures the degree to which the logical form represents the meaning of the utterance. We present a scheduled training scheme which balances the contribution of the two components and objectives. To further boost performance, we propose to neurally encode a lexicon, as a means of injecting prior domain knowledge to the neural parameters.
We evaluate our system on three Freebase datasets which consist of utterance denotation pairs: WebQuestions BIBREF14 , GraphQuestions BIBREF15 , and Spades BIBREF16 . Experimental results across datasets show that our weakly-supervised semantic parser achieves state-of-the-art performance.
The Neural Parser-Ranker
Conventional weakly-supervised semantic parsers BIBREF17 consist of two major components: a parser, which is chart-based and non-parameterized, recursively builds derivations for each utterance span using dynamic programming. A learner, which is a log-linear model, defines features useful for scoring and ranking the set of candidate derivations, based on the correctness of execution results. As mentioned in liang2016learning, the chart-based parser brings a disadvantage since it does not support incremental contextual interpretation. The dynamic programming algorithm requires that features of a span are defined over sub-derivations in that span.
In contrast to a chart-based parser, a parameterized neural semantic parser decodes logical forms with global utterance features. However, training a weakly-supervised neural parser is challenging since there is no access to gold-standard logical forms for backpropagation. Besides, it should be noted that a neural decoder is conditionally generative: decoding is performed greedily conditioned on the utterance and the generation history—it makes no use of global logical form features. In this section, we introduce a parser-ranker framework which combines the best of conventional and neural approaches in the context of weakly-supervised semantic parsing.
Parser
Our work follows cheng2017learning, cheng2017learning2 in using LISP-style functional queries as the logical formulation. Advantageously, functional queries are recursive, tree-structured and can naturally encode logical form derivations (i.e., functions and their application order). For example, the utterance “who is obama's eldest daughter” is simply represented with the function-argument structure argmax(daughterOf(Obama), ageOf). Table 1 displays the functions we use in this work; a more detailed specification can be found in the appendix.
To generate logical forms, our system adopts a variant of the neural sequence-to-tree model proposed in cheng2017learning. During generation, the prediction space is restricted by the grammar of the logical language (e.g., the type and the number of arguments required by a function) in order to ensure that output logical forms are well-formed and executable. The parser consists of a bidirectional LSTM BIBREF18 encoder and a stack-LSTM BIBREF19 decoder, introduced as follows.
The bidirectional LSTM encodes a variable-length utterance $x=(x_1, \cdots , x_n)$ into a list of token representations $[h_1, \cdots , h_n]$ , where each representation is the concatenation of the corresponding forward and backward LSTM states.
After the utterance is encoded, the logical form is generated with a stack-LSTM decoder. The output of the decoder consists of functions which generate the logical form as a derivation tree in depth-first order. There are three classes of functions:
Class-1 functions generate non-terminal tree nodes. In our formulation, non-terminal nodes include language-dependent functions such as count and argmax, as described in the first four rows of Table 1 . A special non-terminal node is the relation placeholder relation.
Class-2 functions generate terminal tree nodes. In our formulation, terminal nodes include the relation placeholder relation and the entity placeholder entity.
Class-3 function reduce completes a subtree. Since generation is performed in depth-first order, the parser needs to identify when the generation of a subtree completes, i.e., when a function has seen all its required arguments.
The functions used to generate the example logical form argmax(daughterOf(Obama), ageOf) are shown in Figure 2 . The stack-LSTM makes two types of updates based on the functions it predicts:
Update-1: when a Class-1 or Class-2 function is called, a non-terminal or terminal token $l_t$ is generated. At this point, the stack-LSTM state, denoted by $g_t$ , is updated from its previous state $g_{t-1}$ as in an ordinary LSTM:
$$g_t = \textnormal {LSTM} (l_t, g_{t-1})$$ (Eq. 11)
The new state is additionally pushed onto the stack marking whether it corresponds to a non-terminal or terminal.
Update-2: when the reduce function is called (Class-3), the states of the stack-LSTM are recursively popped from the stack until a non-terminal is encountered. This non-terminal state is popped as well, after which the stack-LSTM reaches an intermediate state denoted by $g_{t-1:t}$ . At this point, we compute the representation of the completed subtree $z_t$ as:
$$z_t = W_z \cdot [p_z : c_z]$$ (Eq. 13)
where $p_z$ denotes the parent (non-terminal) embedding of the subtree, and $c_z$ denotes the average embedding of the children (terminals or already-completed subtrees). $W_z $ is the weight matrix. Finally, $z_t$ serves as input for updating $g_{t-1:t}$ to $g_t$ :
$$g_t = \textnormal {LSTM} (z_t, g_{t-1:t})$$ (Eq. 14)
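The stack bookkeeping behind these updates can be traced with a short routine. The sketch below is schematic: it only records which tree nodes are open or completed and omits the LSTM states and neural scoring used at every step.

```python
def apply_function(stack, func_class, token=None):
    """Schematic stack updates for the three function classes (no LSTM states)."""
    if func_class in ("class1", "class2"):            # Update-1: push a non-terminal or terminal
        stack.append((func_class, token))
    else:                                             # Update-2: reduce, pop until a non-terminal
        children = []
        while stack[-1][0] != "class1":
            children.append(stack.pop())
        parent = stack.pop()
        subtree = (parent[1], [c[1] for c in reversed(children)])
        stack.append(("class2", subtree))             # a completed subtree behaves like a terminal
    return stack

stack = []
for step in [("class1", "argmax"), ("class1", "daughterOf"), ("class2", "Obama"),
             ("class3", None), ("class2", "ageOf"), ("class3", None)]:
    stack = apply_function(stack, *step)
print(stack)   # one completed subtree mirroring argmax(daughterOf(Obama), ageOf)
```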
At each time step of the decoding, the parser first predicts a subsequent function $f_{t+1}$ conditioned on the decoder state $g_t$ and the encoder states $h_1 \cdots h_n$ . We apply standard soft attention BIBREF20 between $g_t$ and the encoder states $h_1 \cdots h_n$ to compute a feature representation $\bar{h}_t $ :
$$u_t^i = V \tanh (W_h h_i + W_g g_t)$$ (Eq. 16)
$$a_t^i = \textnormal {softmax} (u_t^i )$$ (Eq. 17)
where $V$ , $W_h$ , and $W_g$ are all weight parameters. The prediction of the function $f_{t+1}$ is computed with a softmax classifier, which takes the concatenated features $\bar{h}_t $ and $g_t$ as input:
$$f_{t+1} \sim \textnormal {softmax} ( W_{y} \tanh ( W_f [\bar{h}_t, g_t] ) )$$ (Eq. 19)
where $W_y$ and $W_f$ are weight parameters. When $f_{t+1}$ is a language-dependent function (first four rows in Table 1 , e.g., argmax), it is directly used as a non-terminal token $l_{t+1}$ to construct the logical form. However, when $f_{t+1}$ is a relation or entity placeholder, we further predict the specific relation or entity $l_{t+1}$ with another set of neural parameters:
$$l_{t+1} \sim \textnormal {softmax} ( W_{y^{\prime }} \tanh ( W_{l} [\bar{h}_t, g_t] ) )$$ (Eq. 20)
where $W_{y^{\prime }}$ and $W_{l}$ are weight matrices.
Note that in the weakly supervised setting, the parser decodes a list of candidate logical forms $Y$ with beam search, instead of outputting the most likely logical form $y$ . During training, candidate logical forms are executed against a knowledge base to find those which are consistent (denoted by $Y_c(x)$ ) and lead to the correct denotation. Then, the parser is trained to maximize the total log likelihood of these consistent logical forms:
$$\begin{split} & \sum _{y \in Y_c(x)} \log p(y|x) = \\ & \sum _{y \in Y_c(x)} \log p(f_1,\cdots , f_k, l_1, \cdots , l_o|x) \end{split}$$ (Eq. 21)
where $k$ denotes the number of functions used to generate the logical form, and $o$ (smaller than $k$ ) denotes the number of tree nodes in the logical form.
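In implementation terms, this objective amounts to summing token-level log-probabilities over just the consistent beam candidates. The sketch below assumes each candidate comes with its per-step log-probabilities and that an `execute` function returning a denotation is available; both are placeholders rather than descriptions of our actual code.

```python
import torch

def parser_loss(candidates, gold_denotation, execute):
    """Negative total log-likelihood of consistent logical forms (cf. Eq. 21).

    candidates: list of (logical_form, per_step_logprobs) pairs from beam search.
    execute:    placeholder function mapping a logical form to its denotation.
    """
    consistent = [logprobs.sum() for form, logprobs in candidates
                  if execute(form) == gold_denotation]
    if not consistent:                # no candidate reaches the correct denotation
        return None
    return -torch.stack(consistent).sum()
```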
Ranker
It is impractical to rely solely on a neural decoder to find the most likely logical form at run time in the weakly-supervised setting. One reason is that although the decoder utilizes global utterance features for generation, it cannot leverage global features of the logical form since a logical form is conditionally generated following a specific tree-traversal order. To this end, we follow previous work BIBREF21 and introduce a ranker to the system. The role of the ranker is to score the candidate logical forms generated by the parser; at test time, the logical form receiving the highest score will be used for execution. The ranker is a discriminative log-linear model over logical form $y$ given utterance $x$ :
$$p_\theta (y|x) = \frac{\exp (\phi (x, y)^T \theta )}{\sum _{y^{\prime } \in Y(x)} \exp (\phi (x, y^{\prime })^T \theta )}$$ (Eq. 23)
where $Y(x)$ is the set of candidate logical forms; $\phi $ is the feature function that maps an utterance-logical form pair onto a feature vector; and $\theta $ denotes the weight parameters of the model.
Since the training data consists only of utterance-denotation pairs, the ranker is trained to maximize the log-likelihood of the correct answer $z$ by treating logical forms as a latent variable:
$$\log p(z|x) = \log \sum _{y \in Y_c(x)} p(y|x) p(z|x,y)$$ (Eq. 24)
where $Y_c(x)$ denotes the subset of candidate logical forms which execute to the correct answer; and $p(z|x,y)$ equates to 1 in this case.
Training of the neural parser-ranker system involves the following steps. Given an input utterance, the parser first generates a list of candidate logical forms via beam search. The logical forms are then executed and those which yield the correct denotation are marked as consistent. The parser is trained to optimize the total likelihood of consistent logical forms (Equation ( 21 )), while the ranker is trained to optimize the marginal likelihood of denotations (Equation ( 24 )). The search space can be further reduced by performing entity linking which restricts the number of logical forms to those containing only a small set of entities.
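A minimal sketch of the ranker objective is given below; it assumes that feature vectors and consistency flags for the beam candidates have already been computed, and it leaves out the feature extraction and beam search themselves.

```python
import torch

def ranker_loss(features, consistent_mask, theta):
    """Negative marginal log-likelihood of the correct denotation (cf. Eqs. 23-24).

    features:        (num_candidates, num_features) tensor for one utterance.
    consistent_mask: boolean tensor marking candidates executing to the correct answer.
    theta:           weight vector of the log-linear ranker.
    """
    scores = features @ theta                        # phi(x, y)^T theta
    log_p_y = torch.log_softmax(scores, dim=0)       # log p(y | x) over the candidate set
    # log p(z | x) = log of the summed probability of consistent candidates,
    # since p(z | x, y) = 1 for exactly those candidates.
    return -torch.logsumexp(log_p_y[consistent_mask], dim=0)
```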
Handling Spurious Logical Forms
The neural parser-ranker system relies on beam search to find consistent logical forms that execute to the correct answer. These logical forms are then used as surrogate annotations and provide supervision to update the parser's parameters. However, some of these logical forms will be misleading training signals for the neural semantic parser on account of being spurious: they coincidentally execute to the correct answer without matching the utterance semantics.
In this section we propose a method of removing spurious logical forms by validating how well they match the utterance meaning. The intuition is that a meaning-preserving logical form should be able to reconstruct the original utterance with high likelihood. However, since spurious logical forms are not annotated either, a direct maximum likelihood solution does not exist. To this end, we propose a generative model for measuring the reconstruction likelihood.
The model assumes utterance $x$ is generated from the corresponding logical form $y$ , and only the utterance is observable. The objective is therefore to maximize the log marginal likelihood of $x$ :
$$\log p(x) = \log \sum _y p(x, y)$$ (Eq. 25)
We adopt neural variational inference BIBREF22 to solve the above objective, which is equivalent to maximizing an evidence lower bound:
$$\begin{split} \log p(x) & = \log \sum _y \frac{q(y|x) p(x|y) p(y)}{q(y|x)} \\ & \ge \mathbb {E}_{q(y|x)} \log p(x|y) + \mathbb {E}_{q(y|x)} \log \frac{p(y)}{q(y|x)} \\ \end{split}$$
Since our semantic parser always outputs well-formed logical forms, we assume a uniform constant prior $p(y)$ . The above objective can thus be reduced to:
$$\mathbb {E}_{q(y|x)} \log p(x|y) - \mathbb {E}_{q(y|x)} \log q(y|x) = \mathcal {L}(x)$$ (Eq. 27)
where the first term computes the reconstruction likelihood $p(x|y)$ ; and the second term is the entropy of the approximated posterior $q(y|x) $ for regularization. Specifically, we use the semantic parser to compute the approximated posterior $q(y|x)$ . The reconstruction likelihood $p(x|y)$ is computed with an inverse parser which recovers utterance $x$ from its logical form $y$ . We use $p(x|y)$ to measure how well the logical form reflects the utterance meaning; details of the inverse parser are described as follows.
Scheduled Training
Together with the inverse parser for removing spurious logical forms, the proposed system consists of three components: a parser which generates logical forms from an utterance, a ranker which measures the likelihood of a logical form executing to the correct denotation, and an inverse parser which measures the degree to which logical forms are meaning-preserving using reconstruction likelihood. Our semantic parser is trained following a scheduled training procedure, balancing the two objectives.
Neural Lexicon Encoding
In this section we further discuss how the semantic parser presented so far can be enhanced with a lexicon. A lexicon is essentially a coarse mapping between natural language phrases and knowledge base relations and entities, and has been widely used in conventional chart-based parsers BIBREF14 , BIBREF23 . Here, we show how a lexicon (either hard-coded or statistically-learned BIBREF24 ) can be used to benefit a neural semantic parser.
The central idea is that relations or entities can be viewed as a single-node tree-structured logical form. For example, based on the lexicon, the natural language phrase “is influenced by” can be parsed to the logical form influence.influence_node.influenced_by. We can therefore pretrain the semantic parser (and the inverse parser) with these basic utterance-logical form pairs which act as important prior knowledge for initializing the distributions $q(y|x)$ and $p(x|y)$ . With pre-trained word embeddings capturing linguistic regularities on the natural language side, we also expect the approach to help the neural model generalize to unseen natural language phrases quickly. For example, by encoding the mapping between the natural language phrase “locate in” and the Freebase predicate fb:location.location.containedby, the parser can potentially link the new phrase “located at” to the same predicate. We experimentally assess whether the neural lexicon enhances the performance of our semantic parser.
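Concretely, the lexicon can be unrolled into synthetic training pairs before the main training starts; the sketch below only shows this data construction step, with invented lexicon entries, and the pretraining itself simply reuses the parser's ordinary likelihood objective on these pairs.

```python
# Invented lexicon entries: natural language phrase -> knowledge base symbol.
lexicon = {
    "is influenced by": "influence.influence_node.influenced_by",
    "located in": "location.location.containedby",
    "locate in": "location.location.containedby",
}

# Each entry becomes an utterance paired with a single-node logical form, so the parser
# (and the inverse parser) can be pretrained on them like ordinary training examples.
pretraining_pairs = [(phrase.split(), [symbol]) for phrase, symbol in lexicon.items()]
for utterance, logical_form in pretraining_pairs:
    print(utterance, "->", logical_form)
```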
Experiments
In this section we evaluate the performance our semantic parser. We introduce the various datasets used in our experiments, training settings, model variants used for comparison, and finally present and analyze our results.
Datasets
We evaluated our model on three Freebase datasets: WebQuestions BIBREF14 , GraphQuestions BIBREF15 and Spades BIBREF16 . WebQuestions contains 5,810 real questions asked by people on the web paired by answers. GraphQuestions contains 5,166 question-answer pairs which were created by showing 500 Freebase graph queries to Amazon Mechanical Turk workers and asking them to paraphrase them into natural language. Spades contains 93,319 question-answer pairs which were created by randomly replacing entities in declarative sentences with a blank symbol.
Training
Across training regimes, the dimensions of word vector, logical form token vector, and LSTM hidden states (for the semantic parser and the inverse parser) are 50, 50, and 150, respectively. Word embeddings were initialized with Glove embeddings BIBREF25 . All other embeddings were randomly initialized. We used one LSTM layer in the forward and backward directions. Dropout was used before the softmax activation (Equations ( 19 ), ( 20 ), and ( 34 )). The dropout rate was set to 0.5. Momentum SGD BIBREF26 was used as the optimization method to update the parameters of the model.
As mentioned earlier, we use entity linking to reduce the beam search space. Entity mentions in Spades are automatically annotated with Freebase entities BIBREF27 . For WebQuestions and GraphQuestions we perform entity linking following the procedure described in BIBREF28 . We identify potential entity spans using seven handcrafted part-of-speech patterns and associate them with Freebase entities obtained from the Freebase/KG API. We use a structured perceptron trained on the entities found in WebQuestions and GraphQuestions to select the top 10 non-overlapping entity disambiguation possibilities. We treat each possibility as a candidate entity and construct candidate utterances with a beam search of size 300.
Key features of the log-linear ranker introduced in Section "Parser " include the entity score returned by the entity linking system, the likelihood score of the relation in the logical form predicted by the parser, the likelihood score of the logical form predicted by the parser, the embedding similarity between the relation in the logical form and the utterance, the similarity between the relation and the question words in the utterance, and the answer type as indicated by the last word in the Freebase relation BIBREF29 . All features are normalized across candidate logical forms. For all datasets we use average F1 BIBREF14 as our evaluation metric.
Model Variants
We experiment with three variants of our model. We primarily consider the neural parser-ranker system (denoted by npr) described in Section "Parser " which is trained to maximize the likelihood of consistent logical forms. We then compare it to a system augmented with a generative ranker (denoted by granker), introducing the second objective of maximizing the reconstruction likelihood. Finally, we examine the impact of neural lexicon encoding when it is used for the generative ranker, and also when it is used for the entire system.
Results
Experimental results on WebQuestions are shown in Table 2 . We compare the performance of npr with previous work, including conventional chart-based semantic parsing models (e.g., berant-EtAl:2013:EMNLP; first block in Table 2 ), information extraction models (e.g., yao2014information; second block in Table 2 ), and more recent neural question-answering models (e.g., dong2015question; third block in Table 2 ). Most neural models do not generate logical forms but instead build a differentiable network to solve a specific task such as question-answering. An exception is the neural sequence-to-tree model of cheng2017learning, which we extend to build the vanilla npr model. A key difference of npr is that it employs soft attention instead of hard attention, which cheng2017learning use to rationalize predictions.
As shown in Table 2 , the basic npr system outperforms most previous chart-based semantic parsers. Our results suggest that neural networks are powerful tools for generating candidate logical forms in a weakly-supervised setting, due to their ability of encoding and utilizing sentential context and generation history. Compared to cheng2017learning, our system also performs better. We believe the reason is that it employs soft attention instead of hard attention. Soft attention makes the parser fully differentiable and optimization easier. The addition of the inverse parser ( $+$ granker) to the basic npr model yields marginal gains while the addition of the neural lexicon encoding to the inverse parser brings performance improvements over npr and granker. We hypothesize that this is because the inverse parser adopts an unsupervised training objective, which benefits substantially from prior domain-specific knowledge used to initialize its parameters. When neural lexicon encoding is incorporated in the semantic parser as well, system performance can be further improved. In fact, our final system (last row in Table 2 ) outperforms all previous models except that of xu2016question, which uses external Wikipedia resources to prune out erroneous candidate answers.
Tables 3 and 4 present our results on GraphQuestions and Spades, respectively. Comparison systems for GraphQuestions include two chart-based semantic parsers BIBREF14 , BIBREF30 , an information extraction model BIBREF31 , a neural sequence-to-tree model with hard attention BIBREF32 and a model based on universal dependency to logical form conversion BIBREF33 . On Spades we compare with the method of bisk2016evaluating which parses an utterance into a syntactic representation which is subsequently grounded to Freebase; and also with das2017question who employ memory networks and external text resources. Results on both datasets follow similar trends as in WebQuestions. The best performing npr variant achieves state-of-the-art results on GraphQuestions and it comes close to the best model on Spades without using any external resources.
One of the claims put forward in this paper is that the extended npr model reduces the impact of spurious logical forms during training. Table 5 highlights examples of spurious logical forms which are not semantically correct but are nevertheless assigned higher scores in the vanilla npr (red colour). These logical forms become less likely in the extended npr, while the scores of more semantically faithful representations (blue colour) are boosted.
Discussion
The vanilla npr model is optimized with consistent logical forms which lead to correct denotations. Although it achieves competitive results compared to chart-based parsers, the training of this model can be misled by spurious logical forms. The introduction of the inverse parser aims to alleviate the problem by scoring how a logical form reflects the utterance semantics. Although the inverse parser is not directly used to rank logical forms at test time, the training objective it adopts encourages the parser to generate meaning-preserving logical forms with higher likelihood. These probabilities are used as features in the log-linear ranker, and therefore the inverse parser affects the ranking results, albeit implicitly.
However, we should point out that the unsupervised training objective is relatively difficult to optimize, since there are no constraints to regularize the latent logical forms. This motivates us to develop a scheduled training procedure; as our results show, when trained properly the inverse parser and the unsupervised objective bring performance gains. Moreover, the neural lexicon encoding method we applied essentially produces synthetic data to further regularize the latent space.
Related Work
Various types of supervision have been explored to train semantic parsers. Early semantic parsers have used annotated training data consisting of sentences and their corresponding logical forms BIBREF35 , BIBREF36 , BIBREF37 , BIBREF10 . In order to scale semantic parsing to open-domain problems, weakly-supervised semantic parsers are trained on utterance-denotation pairs BIBREF1 , BIBREF2 , BIBREF21 , BIBREF38 , BIBREF39 , BIBREF40 , BIBREF41 , BIBREF33 . Most previous work employs a chart-based parser to produce logical forms from a grammar which combines domain-general aspects with lexicons.
Recently, neural semantic parsing has attracted a great deal of attention. Previous work has mostly adopted fully-supervised, sequence-to-sequence models to generate logical form strings from natural language utterances BIBREF42 , BIBREF43 , BIBREF44 . Other work explores the use of reinforcement learning to train neural semantic parsers from question-answer pairs BIBREF45 or from user feedback BIBREF46 . More closely related to our work, goldman2018weakly adopt a neural semantic parser and a discriminative ranker to solve a visual reasoning challenge. They attempt to alleviate the search space and spuriousness challenges with abstractive examples. yin2018structvae adopt a tree-based variational autoencoder for semi-supervised semantic parsing. Neural variational inference has also been used in other NLP tasks including relation discovery BIBREF47 , sentence compression BIBREF48 , and parsing BIBREF49 .
Conclusions
In this work we proposed a weakly-supervised neural semantic parsing system trained on utterance-denotation pairs. The system employs a neural sequence-to-tree parser to generate logical forms for a natural language utterance. The logical forms are subsequently ranked with two components and objectives: a log-linear model which scores the likelihood of correct execution, and a generative neural inverse parser which measures whether logical forms are meaning preserving. We proposed a scheduled training procedure to balance the two objectives, and a neural lexicon encoding method to initialize model parameters with prior knowledge. Experiments on three semantic parsing datasets demonstrate the effectiveness of our system. In the future, we would like to train our parser with other forms of supervision such as feedback from users BIBREF50 , BIBREF46 or textual evidence BIBREF51 . | By treating logical forms as a latent variable and training a discriminative log-linear model over logical form y given x. |
f268b70b08bd0436de5310e390ca5f38f7636612 | f268b70b08bd0436de5310e390ca5f38f7636612_0 | Q: Which conventional alignment models do they use as guidance?
Text: Introduction
Neural Machine Translation (NMT) has achieved great successes on machine translation tasks recently BIBREF0 , BIBREF1 . Generally, it relies on a recurrent neural network under the Encode-Decode framework: it firstly encodes a source sentence into context vectors and then generates its translation token-by-token, selecting from the target vocabulary. Among different variants of NMT, attention based NMT, which is the focus of this paper, is attracting increasing interests in the community BIBREF0 , BIBREF2 . One of its advantages is that it is able to dynamically make use of the encoded context through an attention mechanism thereby allowing the use of fewer hidden layers while still maintaining high levels of translation performance.
An attention mechanism is designed to predict the alignment of a target word with respect to source words. In order to facilitate incremental decoding, it tries to make this alignment prediction without any information about the target word itself, and thus this attention can be considered to be a form of a reordering model (see § SECREF2 for more details). However, it differs from conventional alignment models that are able to use the target word to infer its alignments BIBREF3 , BIBREF4 , BIBREF5 , and as a result there is a substantial gap in quality between the alignments derived by this attention based NMT and conventional alignment models (54 VS 30 in terms of AER for Chinese-to-English as reported in BIBREF6 ). This discrepancy might be an indication that the potential of NMT is limited. In addition, the attention in NMT is learned in an unsupervised manner without explicit prior knowledge about alignment. In contrast, in conventional statistical machine translation (SMT), it is standard practice to learn reordering models in a supervised manner with the guidance from conventional alignment models.
Inspired by the supervised reordering in conventional SMT, in this paper, we propose a Supervised Attention based NMT (SA-NMT) model. Specifically, similar to conventional SMT, we first run off-the-shelf aligners (GIZA++ BIBREF3 or fast_align BIBREF4 etc.) to obtain the alignment of the bilingual training corpus in advance. Then, treating this alignment result as the supervision of attention, we jointly learn attention and translation, both in a supervised manner. Since the conventional aligners deliver higher quality alignments, it is expected that the alignment in the supervised attention NMT will be improved, leading to better end-to-end translation performance. One advantage of the proposed SA-NMT is that it implements the supervision of attention as a regularization in the joint training objective (§3.2). Furthermore, since the supervision of attention lies in the middle of the entire network architecture rather than at the top (as is the case for the supervision of translation; see Figure 1(b)), it serves to mitigate the vanishing gradient problem during back-propagation BIBREF7 .
This paper makes the following contributions:
Revisiting Neural Machine Translation
Suppose INLINEFORM0 denotes a source sentence, INLINEFORM1 a target sentence. In addition, let INLINEFORM2 denote a prefix of INLINEFORM3 . Neural Machine Translation (NMT) directly maps a source sentence into a target under an encode-decode framework. In the encoding stage, it uses two bidirectional recurrent neural networks to encode INLINEFORM4 into a sequence of vectors INLINEFORM5 , with INLINEFORM6 representing the concatenation of two vectors for INLINEFORM7 source word from two directional RNNs. In the decoding stage, it generates the target translation from the conditional probability over the pair of sequences INLINEFORM8 and INLINEFORM9 via a recurrent neural network parametrized by INLINEFORM10 as follows: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 respectively denote an RNN hidden state (i.e. a vector) and a context vector at timestep INLINEFORM2 ; INLINEFORM3 is a transformation function mapping into a vector with dimension of the target vocabulary size; and INLINEFORM4 denotes the INLINEFORM5 component of a vector. Furthermore, INLINEFORM7 is defined by an activation function, i.e. a Gated Recurrent Unit BIBREF8 ; and the context vector INLINEFORM8 is a dynamical source representation at timestep INLINEFORM9 , and calculated as the weighted sum of source encodings INLINEFORM10 , i.e. INLINEFORM11 . Here the weight INLINEFORM12 implements an attention mechanism, and INLINEFORM13 is the alignment probability of INLINEFORM14 being aligned to INLINEFORM15 . INLINEFORM16 is derived through a feedforward neural network INLINEFORM17 as follows: DISPLAYFORM0
where INLINEFORM0 consists of two layers, the top one being a softmax layer. We skip the detailed definitions of INLINEFORM1 together with INLINEFORM2 , INLINEFORM3 and INLINEFORM4 , and refer the readers to BIBREF0 instead. Figure 1(a) shows one slice of the computational graph for the NMT definition at time step INLINEFORM9 .
To train NMT, the following negative log-likelihood is minimized: DISPLAYFORM0
where INLINEFORM0 is a bilingual sentence pair from a given training corpus, and INLINEFORM1 is as defined in Eq.( EQREF5 ). Note that even though the training is conducted in a supervised manner with respect to translation, i.e., INLINEFORM2 are observable in Figure 1(a), the attention is learned in an unsupervised manner, since INLINEFORM3 is hidden.
In Figure 1(a), INLINEFORM0 cannot depend on INLINEFORM1 , as the target word INLINEFORM2 is unknown at timestep INLINEFORM3 during testing. Therefore, at timestep INLINEFORM4 , NMT first tries to calculate INLINEFORM5 , through which NMT figures out which source words will be translated next, even though the next target word INLINEFORM6 is unavailable. From this point of view, the attention mechanism plays a role in reordering and thus can be considered as a reordering model. Unlike this attention model, conventional alignment models define the alignment INLINEFORM7 directly over INLINEFORM8 and INLINEFORM9 as follows: INLINEFORM10
where INLINEFORM0 denotes either a log-probability INLINEFORM1 for a generative model like the IBM models BIBREF9 or a feature function for discriminative models BIBREF5 . In order to infer INLINEFORM2 , alignment models can readily use the entire INLINEFORM3 , of course including INLINEFORM4 as well, and can thereby model the alignment between INLINEFORM5 and INLINEFORM6 more adequately. As a result, the attention based NMT might not deliver satisfying alignments compared to conventional alignment models, as reported in BIBREF6 . This may be a sign that the potential of NMT is limited in end-to-end translation.
Supervised Attention
In this section, we introduce supervised attention to improve the alignment, which consequently leads to better translation performance for NMT. Our basic idea is simple: similar to conventional SMT, it firstly uses a conventional aligner to obtain the alignment on the training corpus; then it employs these alignment results as supervision to train the NMT. During testing, decoding proceeds in exactly the same manner as standard NMT, since there is no alignment supervision available for unseen test sentences.
Preprocessing Alignment Supervision
As described in §2, the attention model outputs a soft alignment INLINEFORM0 , such that INLINEFORM1 is a normalized probability distribution. In contrast, most aligners are typically oriented to grammar induction for conventional SMT, and they usually output `hard' alignments, such as BIBREF3 . They only indicate whether a target word is aligned to a source word or not, and this might not correspond to a distribution for each target word. For example, one target word may align to multiple source words, or no source words at all.
Therefore, we apply the following heuristics to preprocess the hard alignment: if a target word does not align to any source words, we inherit its affiliation from the closest aligned word with preference given to the right, following BIBREF10 ; if a target word is aligned to multiple source words, we assume it aligns to each one evenly. In addition, in the implementation of NMT, there are two special tokens `eol' added to both source and target sentences. We assume they are aligned to each other. In this way, we can obtain the final supervision of attention, denoted as INLINEFORM0 .
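These heuristics are straightforward to implement. The sketch below builds, for each target position, a distribution over source positions from a set of hard links; positions are 0-indexed and the two `eol' tokens are assumed to be the last token on each side.

```python
def hard_to_soft_alignment(links, src_len, tgt_len):
    """Convert hard links {(src_i, tgt_j), ...} into a per-target-word distribution.

    Returns a tgt_len x src_len matrix whose rows sum to 1 (the supervision of attention).
    """
    aligned = {j: sorted(i for i, jj in links if jj == j) for j in range(tgt_len)}
    aligned[tgt_len - 1] = [src_len - 1]             # the two `eol' tokens align to each other

    # Unaligned target words inherit the affiliation of the closest aligned word, right first.
    for j in range(tgt_len):
        if not aligned[j]:
            for offset in range(1, tgt_len):
                for k in (j + offset, j - offset):   # preference given to the right
                    if 0 <= k < tgt_len and aligned[k]:
                        aligned[j] = aligned[k]
                        break
                if aligned[j]:
                    break

    matrix = [[0.0] * src_len for _ in range(tgt_len)]
    for j, srcs in aligned.items():
        for i in srcs:                               # multiple links share the mass evenly
            matrix[j][i] = 1.0 / len(srcs)
    return matrix

print(hard_to_soft_alignment({(0, 0), (0, 1), (2, 3)}, src_len=4, tgt_len=5))
```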
Jointly Supervising Translation and Attention
We propose a soft constraint method to jointly supervise the translation and attention as follows: DISPLAYFORM0
where INLINEFORM0 is as defined in Eq. ( EQREF5 ), INLINEFORM1 is a loss function that penalizes the disagreement between INLINEFORM2 and INLINEFORM3 , and INLINEFORM4 is a hyper-parameter that balances the preference between likelihood and disagreement. In this way, we treat the attention variable INLINEFORM5 as an observable variable as shown in Figure 1(b), which is essentially different from the standard NMT as shown in Figure 1(a). Note that this training objective resembles that in multi-task learning BIBREF11 . Our supervised attention method has two further advantages: firstly, it is able to alleviate overfitting by means of the INLINEFORM6 ; and secondly it is capable of addressing the vanishing gradient problem because the supervision of INLINEFORM7 is closer to INLINEFORM8 than INLINEFORM9 , as in Figure 1(b).
In order to quantify the disagreement between INLINEFORM0 and INLINEFORM1 , three different methods are investigated in our experiments:
Mean Squared Error (MSE) INLINEFORM0
MSE is widely used as a loss for regression tasks BIBREF12 , and it directly encourages INLINEFORM0 to be equal to INLINEFORM1 .
Multiplication (MUL) INLINEFORM0
MUL is particularly designed for agreement in word alignment and has been shown to be effective BIBREF13 , BIBREF6 . Note that, unlike in BIBREF6 , INLINEFORM0 is not a parametrized variable but a constant in this paper.
Cross Entropy (CE) INLINEFORM0
Since for each INLINEFORM0 , INLINEFORM1 is a distribution, it is natural to use CE as the metric to evaluate the disagreement BIBREF14 .
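For concreteness, the sketch below gives common formulations of the three disagreement terms between the predicted attention matrix and the preprocessed reference alignment; they are stand-ins written to make the comparison tangible, not necessarily the exact definitions instantiated above.

```python
import torch

def disagreement(alpha, alpha_ref, kind, eps=1e-8):
    """Penalty between predicted attention alpha and reference alignment alpha_ref.

    Both are (target_len, source_len) matrices whose rows sum to 1. These are common
    formulations of the three losses, used here only for illustration.
    """
    if kind == "MSE":
        return ((alpha - alpha_ref) ** 2).sum()
    if kind == "MUL":
        return -torch.log((alpha * alpha_ref).sum(dim=1) + eps).sum()
    if kind == "CE":
        return -(alpha_ref * torch.log(alpha + eps)).sum()
    raise ValueError(kind)

# Joint objective (Section 3.2): translation loss plus the weighted disagreement,
# e.g.  loss = nll_translation + lam * disagreement(alpha, alpha_ref, "CE")
```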
Experiments
We conducted experiments on two Chinese-to-English translation tasks: one is the NIST task oriented to NEWS domain, which is a large scale task and suitable to NMT; and the other is the speech translation oriented to travel domain, which is a low resource task and thus is very challenging for NMT. We used the case-insensitive BLEU4 to evaluate translation quality and adopted the multi-bleu.perl as its implementation.
The Large Scale Translation Task
We used the data from the NIST2008 Open Machine Translation Campaign. The training data consisted of 1.8M sentence pairs, the development set was nist02 (878 sentences), and the test sets were nist05 (1082 sentences), nist06 (1664 sentences) and nist08 (1357 sentences).
We compared the proposed approach with three strong baselines:
Moses: a phrase-based machine translation system BIBREF15 ;
NMT1: an attention based NMT BIBREF0 system at https://github.com/lisa-groundhog/GroundHog;
NMT2: another implementation of BIBREF0 at https://github.com/nyu-dl/dl4mt-tutorial.
We developed the proposed approach based on NMT2, and denoted it as SA-NMT.
We followed the standard pipeline to run Moses. GIZA++ with grow-diag-final-and was used to build the translation model. We trained a 5-gram target language model on the Gigaword corpus, and used a lexicalized distortion model. All experiments were run with the default settings.
To train NMT1, NMT2 and SA-NMT, we employed the same settings for a fair comparison. Specifically, except for the stopping iteration, which was selected using development data, we used the default settings set out in BIBREF0 for all NMT-based systems: the dimension of word embedding was 620, the dimension of hidden units was 1000, the batch size was 80, the source and target side vocabulary sizes were 30000, the maximum sequence length was 50, the beam size for decoding was 12, and the optimization was done by Adadelta with all hyper-parameters suggested by BIBREF16 . Particularly for SA-NMT, we employed a conventional word aligner to obtain the word alignment on the training data before training. In this paper, we used two different aligners, fast_align and GIZA++. We tuned the hyper-parameter INLINEFORM0 to be 0.3 on the development set, to balance the preference between translation and alignment. Training was conducted on a single Tesla K40 GPU machine. Each update took about 3.0 seconds for both NMT2 and SA-NMT, and 2.4 seconds for NMT1. Roughly, it took about 10 days for NMT2 to finish 300000 updates.
We implemented three different losses to supervise the attention as described in §3.2. To explore their behaviors on the development set, we employed GIZA++ to generate the alignment on the training set prior to training SA-NMT. In Table TABREF21 , we can see that MUL is better than MSE. Furthermore, CE performs best among all losses, and thus we adopt it for the following experiments.
In addition, we also ran fast_align to generate alignments as the supervision for SA-NMT, and the results are reported in Table TABREF22 . We can see that GIZA++ performs slightly better than fast_align, and thus we fix the external aligner to GIZA++ in the following experiments.
Figure FIGREF26 shows the learning curves of NMT2 and SA-NMT on the development set. We can see that NMT2 generally obtains higher BLEU as the number of updates increases, peaking at update 150000, but is unstable from then on. On the other hand, SA-NMT delivers much better BLEU in the early updates and performs more steadily as training proceeds, although it takes more updates to reach its peak.
Table TABREF27 reports the main end-to-end translation results for the large scale task. We find that both standard NMT systems generally outperform Moses, except NMT1 on nist05. The proposed SA-NMT achieves significant and consistent improvements over all three baseline systems, and it obtains an average gain of 2.2 BLEU points on the test sets over its direct baseline NMT2. It is clear from these results that our supervised attention mechanism is highly effective in practice.
As explained in §2, standard NMT can not use the target word information to predict its aligned source words, and thus might fail to predict the correct source words for some target words. For example, for the sentence in the training set in Figure FIGREF29 (a), NMT2 aligned `following' to `皮诺契特 (gloss: pinochet)' rather than `继 (gloss: follow)', and worse still it aligned the word `.' to `在 (gloss: in)' rather than `。' even though this word is relatively easy to align correctly. In contrast, with the help of information from the target word itself, GIZA++ successfully aligned both `following' and `.' to the expected source words (see Figure FIGREF29 (c)). With the alignment results from GIZA++ as supervision, we can see that our SA-NMT can imitate GIZA++ and thus align both words correctly. More importantly, for sentences in the unseen test set, like GIZA++, SA-NMT confidently aligned `but' and `.' to their correct source words respectively as in Figure FIGREF29 (b), where NMT2 failed. It seems that SA-NMT can learn its alignment behavior from GIZA++, and subsequently apply the alignment abilities it has learned to unseen test sentences.
Table TABREF30 shows the overall results on the word alignment task in terms of alignment error rate (AER). We used the manually-aligned dataset of BIBREF5 as the test set. Following BIBREF17 , we force-decode the bilingual sentence pairs, including both source and reference sentences, to obtain the alignment matrices, and then for each target word we extract a one-to-one alignment by picking the source word with the highest alignment confidence as the hard alignment. From Table TABREF30 , we can see clearly that standard NMT (NMT2) is far behind GIZA++ in alignment quality. This shows that it is possible and promising to supervise the attention with GIZA++. With the help of GIZA++, our supervised attention based NMT (SA-NMT) significantly reduces the AER compared with its unsupervised counterpart (NMT2). This shows that the proposed approach realizes our intuition: the alignment is improved, leading to better translation performance.
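For reference, the sketch below shows how hard alignments can be read off an attention matrix and scored; it assumes the standard AER definition over sure and possible gold links, which is our reading of the evaluation rather than a transcript of the actual scripts.

```python
def extract_hard_alignment(attention):
    """For each target word, pick the source word with the highest attention weight."""
    return {(max(range(len(row)), key=row.__getitem__), j) for j, row in enumerate(attention)}

def aer(predicted, sure, possible):
    """Alignment error rate with sure (S) and possible (P) gold links (standard definition)."""
    possible = possible | sure
    return 1.0 - (len(predicted & sure) + len(predicted & possible)) / (len(predicted) + len(sure))

attention = [[0.7, 0.2, 0.1], [0.1, 0.1, 0.8]]    # toy 2x3 matrix: 2 target words, 3 source words
predicted = extract_hard_alignment(attention)      # {(0, 0), (2, 1)} as (source, target) pairs
print(aer(predicted, sure={(0, 0)}, possible={(2, 1)}))   # 0.0
```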
Note that there is still a gap between SA-NMT and GIZA++, as indicated in Table TABREF30 . Since SA-NMT was trained for machine translation instead of word alignment, it is possible to reduce its AER further if we target the word alignment task only. For example, we can enlarge INLINEFORM0 in Eq.( EQREF12 ) to bias the training objective towards the word alignment task, or we can change the architecture slightly to add the target-side information crucial for alignment, as in BIBREF18 , BIBREF19 .
Results on the Low Resource Translation Task
For the low resource translation task, we used the BTEC corpus as the training data, which consists of 30k sentence pairs with 0.27M Chinese words and 0.33M English words. As development and test sets, we used the CSTAR03 and IWSLT04 held-out sets, respectively. We trained a 4-gram language model on the target side of the training corpus for running Moses. For training all NMT systems, we employed the same settings as those in the large scale task, except for the vocabulary size (6000), the batch size (16), and the hyper-parameter INLINEFORM0 for SA-NMT.
Table TABREF32 reports the final results. Firstly, we can see that both standard neural machine translation systems, NMT1 and NMT2, are much worse than Moses, with a substantial gap. This result is not difficult to understand: neural network systems typically require sufficient data to boost their performance, and thus low resource translation tasks are very challenging for them. Secondly, the proposed SA-NMT gains substantially over NMT2, similar to the case in the large scale task, and the gap to Moses is narrowed considerably.
While our SA-NMT does not surpass the state-of-the-art Moses as in large scale translation, this is a strong result if we consider previous work on low resource translation tasks: arthur+:2016 gained over Moses on the Japanese-to-English BTEC corpus, but they resorted to a corpus consisting of 464k sentence pairs; luong+manning:2015 achieved performance comparable to Moses on English-to-Vietnamese with 133k sentence pairs, which is more than 4 times our corpus size. Our method could potentially surpass Moses by using reranking as in BIBREF20 , BIBREF21 , but this is beyond the scope of this paper and we leave it as future work.
Related Work
Many recent works have led to notable improvements in the attention mechanism for neural machine translation. tu+:2016 introduced an explicit coverage vector into the attention mechanism to address the over-translation and under-translation inherent in NMT. feng+:2016 proposed an additional recurrent structure for attention to capture long-term dependencies. cheng+:2016 proposed an agreement-based bidirectional NMT model for symmetrizing alignment. cohn+:2016 incorporated multiple structural alignment biases into attention learning for better alignment. All of them improved the attention models that were learned in an unsupervised manner. While we do not modify the attention model itself, we learn it in a supervised manner, therefore our approach is orthogonal to theirs.
It has always been standard practice to learn reordering models from alignments for conventional SMT either at the phrase level or word level. At the phrase level, koehn+:2007 proposed a lexicalized MSD model for phrasal reordering; xiong+:2006 proposed a feature-rich model to learn phrase reordering for BTG; and li+:2014 proposed a neural network method to learn a BTG reordering model. At the word level, bisazza+federico:2016 surveyed many word reordering models learned from alignment models for SMT, and in particular there are some neural network based reordering models, such as BIBREF22 . Our work is inspired by these works in spirit, and it can be considered to be a recurrent neural network based word-level reordering model. The main difference is that in our approach the reordering model and translation model are trained jointly rather than separately, as in their work.
Conclusion
It has been shown that the attention mechanism in NMT is worse than conventional word alignment models in its alignment accuracy. This paper first provides an explanation for this by viewing the attention mechanism from the point of view of reordering. It then proposes a supervised attention for NMT with guidance from external conventional alignment models, inspired by the supervised reordering models in conventional SMT. Experiments on two Chinese-to-English translation tasks show that the proposed approach achieves better alignment results, leading to significant gains relative to standard attention based NMT.
Acknowledgements
We would like to thank Xugang Lu for invaluable discussions on this work. | GIZA++ BIBREF3 or fast_align BIBREF4 |
7aae4533dbf097992f23fb2e0574ec5c891ca236 | 7aae4533dbf097992f23fb2e0574ec5c891ca236_0 | Q: Which dataset do they use?
Text: Introduction
Neural Machine Translation (NMT) has achieved great success on machine translation tasks recently BIBREF0 , BIBREF1 . Generally, it relies on a recurrent neural network under the Encode-Decode framework: it firstly encodes a source sentence into context vectors and then generates its translation token-by-token, selecting from the target vocabulary. Among different variants of NMT, attention based NMT, which is the focus of this paper, is attracting increasing interest in the community BIBREF0 , BIBREF2 . One of its advantages is that it is able to dynamically make use of the encoded context through an attention mechanism, thereby allowing the use of fewer hidden layers while still maintaining high levels of translation performance.
An attention mechanism is designed to predict the alignment of a target word with respect to source words. In order to facilitate incremental decoding, it tries to make this alignment prediction without any information about the target word itself, and thus this attention can be considered to be a form of a reordering model (see § SECREF2 for more details). However, it differs from conventional alignment models that are able to use the target word to infer its alignments BIBREF3 , BIBREF4 , BIBREF5 , and as a result there is a substantial gap in quality between the alignments derived by this attention based NMT and conventional alignment models (54 VS 30 in terms of AER for Chinese-to-English as reported in BIBREF6 ). This discrepancy might be an indication that the potential of NMT is limited. In addition, the attention in NMT is learned in an unsupervised manner without explicit prior knowledge about alignment. In contrast, in conventional statistical machine translation (SMT), it is standard practice to learn reordering models in a supervised manner with the guidance from conventional alignment models.
Inspired by the supervised reordering in conventional SMT, in this paper, we propose a Supervised Attention based NMT (SA-NMT) model. Specifically, similar to conventional SMT, we first run off-the-shelf aligners (GIZA++ BIBREF3 or fast_align BIBREF4 etc.) to obtain the alignment of the bilingual training corpus in advance. Then, treating this alignment result as the supervision of attention, we jointly learn attention and translation, both in supervised manners. Since the conventional aligners deliver higher quality alignment, it is expected that the alignment in the supervised attention NMT will be improved, leading to better end-to-end translation performance. One advantage of the proposed SA-NMT is that it implements the supervision of attention as a regularization in the joint training objective (§3.2). Furthermore, since the supervision of attention lies in the middle of the entire network architecture rather than at the top, as in the supervision of translation (see Figure 1(b)), it serves to mitigate the vanishing gradient problem during back-propagation BIBREF7 .
This paper makes the following contributions:
Revisiting Neural Machine Translation
Suppose INLINEFORM0 denotes a source sentence, INLINEFORM1 a target sentence. In addition, let INLINEFORM2 denote a prefix of INLINEFORM3 . Neural Machine Translation (NMT) directly maps a source sentence into a target under an encode-decode framework. In the encoding stage, it uses two bidirectional recurrent neural networks to encode INLINEFORM4 into a sequence of vectors INLINEFORM5 , with INLINEFORM6 representing the concatenation of two vectors for INLINEFORM7 source word from two directional RNNs. In the decoding stage, it generates the target translation from the conditional probability over the pair of sequences INLINEFORM8 and INLINEFORM9 via a recurrent neural network parametrized by INLINEFORM10 as follows: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 respectively denote an RNN hidden state (i.e. a vector) and a context vector at timestep INLINEFORM2 ; INLINEFORM3 is a transformation function mapping into a vector with dimension of the target vocabulary size; and INLINEFORM4 denotes the INLINEFORM5 component of a vector. Furthermore, INLINEFORM7 is defined by an activation function, i.e. a Gated Recurrent Unit BIBREF8 ; and the context vector INLINEFORM8 is a dynamical source representation at timestep INLINEFORM9 , and calculated as the weighted sum of source encodings INLINEFORM10 , i.e. INLINEFORM11 . Here the weight INLINEFORM12 implements an attention mechanism, and INLINEFORM13 is the alignment probability of INLINEFORM14 being aligned to INLINEFORM15 . INLINEFORM16 is derived through a feedforward neural network INLINEFORM17 as follows: DISPLAYFORM0
where INLINEFORM0 consists of two layers, the top one being a softmax layer. We skip the detailed definitions of INLINEFORM1 together with INLINEFORM2 , INLINEFORM3 and INLINEFORM4 , and refer the readers to BIBREF0 instead. Figure 1(a) shows one slice of computational graph for NMT definition at time step INLINEFORM9 .
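As a rough illustration of the attention step defined above, the following numpy sketch computes the alignment weights and the context vector for one decoding timestep; the parameter names and shapes are my own assumptions, not the authors' notation.

```python
import numpy as np

def attention_step(s_prev, H, W_a, U_a, v_a):
    # s_prev: previous decoder hidden state, shape (d,)
    # H: source encodings h_1..h_n stacked as an (n, 2d) matrix
    scores = np.tanh(s_prev @ W_a.T + H @ U_a.T) @ v_a   # unnormalized alignment scores, shape (n,)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                                  # softmax over source positions
    context = alpha @ H                                   # context vector: weighted sum of encodings
    return alpha, context
```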
To train NMT, the following negative log-likelihood is minimized: DISPLAYFORM0
where INLINEFORM0 is a bilingual sentence pair from a given training corpus, INLINEFORM1 is as defined in Eq.( EQREF5 ). Note that even though the training is conducted in a supervised manner with respect to translation, i.e., INLINEFORM2 are observable in Figure 1(a), the attention is learned in an unsupervised manner, since INLINEFORM3 is hidden.
In Figure 1(a), INLINEFORM0 cannot depend on INLINEFORM1 , as the target word INLINEFORM2 is unknown at timestep INLINEFORM3 during testing. Therefore, at timestep INLINEFORM4 , NMT first calculates INLINEFORM5 , through which it figures out which source words will be translated next, even though the next target word INLINEFORM6 is unavailable. From this point of view, the attention mechanism plays the role of reordering and thus can be considered a reordering model. Unlike this attention model, conventional alignment models define the alignment INLINEFORM7 directly over INLINEFORM8 and INLINEFORM9 as follows: INLINEFORM10
where INLINEFORM0 denotes either a log-probability INLINEFORM1 for a generative model like the IBM models BIBREF9 or a feature function for discriminative models BIBREF5 . In order to infer INLINEFORM2 , alignment models can readily use the entire INLINEFORM3 , including INLINEFORM4 , and can thereby model the alignment between INLINEFORM5 and INLINEFORM6 more adequately. As a result, attention based NMT might not deliver satisfying alignments compared to conventional alignment models, as reported in BIBREF6 . This may be a sign that the potential of NMT for end-to-end translation is limited.
Supervised Attention
In this section, we introduce supervised attention to improve the alignment, which consequently leads to better translation performance for NMT. Our basic idea is simple: similar to conventional SMT, it firstly uses a conventional aligner to obtain the alignment on the training corpus; then it employs these alignment results as supervision to train the NMT. During testing, decoding proceeds in exactly the same manner as standard NMT, since there is no alignment supervision available for unseen test sentences.
Preprocessing Alignment Supervision
As described in §2, the attention model outputs a soft alignment INLINEFORM0 , such that INLINEFORM1 is a normalized probability distribution. In contrast, most aligners are typically oriented to grammar induction for conventional SMT, and they usually output `hard' alignments, such as BIBREF3 . They only indicate whether a target word is aligned to a source word or not, and this might not correspond to a distribution for each target word. For example, one target word may align to multiple source words, or no source words at all.
Therefore, we apply the following heuristics to preprocess the hard alignment: if a target word does not align to any source words, we inherit its affiliation from the closest aligned word with preference given to the right, following BIBREF10 ; if a target word is aligned to multiple source words, we assume it aligns to each one evenly. In addition, in the implementation of NMT, there are two special tokens `eol' added to both source and target sentences. We assume they are aligned to each other. In this way, we can obtain the final supervision of attention, denoted as INLINEFORM0 .
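A minimal sketch of these preprocessing heuristics is given below; the data structures and function names are mine, and the handling of the `eol' tokens is omitted for brevity.

```python
def alignment_to_distribution(links, src_len, tgt_len):
    # links: set of hard alignment links (source_idx, target_idx) from the aligner
    by_target = [[] for _ in range(tgt_len)]
    for s, t in links:
        by_target[t].append(s)
    # unaligned target words inherit the affiliation of the closest aligned word,
    # with preference given to the right
    for t in range(tgt_len):
        if not by_target[t]:
            for offset in range(1, tgt_len):
                for cand in (t + offset, t - offset):   # right neighbor checked first
                    if 0 <= cand < tgt_len and by_target[cand]:
                        by_target[t] = list(by_target[cand])
                        break
                if by_target[t]:
                    break
    # a target word aligned to several source words splits its mass evenly
    alpha_hat = [[0.0] * src_len for _ in range(tgt_len)]
    for t, sources in enumerate(by_target):
        for s in sources:
            alpha_hat[t][s] = 1.0 / len(sources)
    return alpha_hat
```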
Jointly Supervising Translation and Attention
We propose a soft constraint method to jointly supervise the translation and attention as follows: DISPLAYFORM0
where INLINEFORM0 is as defined in Eq. ( EQREF5 ), INLINEFORM1 is a loss function that penalizes the disagreement between INLINEFORM2 and INLINEFORM3 , and INLINEFORM4 is a hyper-parameter that balances the preference between likelihood and disagreement. In this way, we treat the attention variable INLINEFORM5 as an observable variable, as shown in Figure 1(b), which is essentially different from standard NMT as shown in Figure 1(a). Note that this training objective resembles that in multi-task learning BIBREF11 . Our supervised attention method has two further advantages: firstly, it is able to alleviate overfitting by means of the INLINEFORM6 ; and secondly, it is capable of addressing the vanishing gradient problem, because the supervision of INLINEFORM7 is closer to INLINEFORM8 than INLINEFORM9 , as shown in Figure 1(b).
In order to quantify the disagreement between INLINEFORM0 and INLINEFORM1 , three different methods are investigated in our experiments:
Mean Squared Error (MSE) INLINEFORM0
MSE is widely used as a loss for regression tasks BIBREF12 , and it directly encourages INLINEFORM0 to be equal to INLINEFORM1 .
Multiplication (MUL) INLINEFORM0
MUL is particularly designed for agreement in word alignment and it has been shown to be effective BIBREF13 , BIBREF6 . Note that different from those in BIBREF6 , INLINEFORM0 is not a parametrized variable but a constant in this paper.
Cross Entropy (CE) INLINEFORM0
Since for each INLINEFORM0 , INLINEFORM1 is a distribution, it is natural to use CE as the metric to evaluate the disagreement BIBREF14 .
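The per-loss formulas above appear only as placeholders in this excerpt, so the sketch below instantiates just the two standard ones (MSE and CE) together with the joint objective; the multiplication-based loss is left out because its precise form is not shown here. Variable names and the epsilon smoothing are my additions.

```python
import numpy as np

# alpha, alpha_hat: (target_len, source_len) arrays holding the model's attention
# and the preprocessed supervision distribution, respectively.
def mse_loss(alpha, alpha_hat):
    return np.sum((alpha - alpha_hat) ** 2)

def ce_loss(alpha, alpha_hat, eps=1e-12):
    # cross entropy of the model's attention under the supervision distribution
    return -np.sum(alpha_hat * np.log(alpha + eps))

def joint_objective(neg_log_likelihood, alpha, alpha_hat, lam, disagreement=ce_loss):
    # translation loss plus the disagreement term, balanced by the hyper-parameter lam
    return neg_log_likelihood + lam * disagreement(alpha, alpha_hat)
```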
Experiments
We conducted experiments on two Chinese-to-English translation tasks: one is the NIST task oriented to NEWS domain, which is a large scale task and suitable to NMT; and the other is the speech translation oriented to travel domain, which is a low resource task and thus is very challenging for NMT. We used the case-insensitive BLEU4 to evaluate translation quality and adopted the multi-bleu.perl as its implementation.
The Large Scale Translation Task
We used the data from the NIST2008 Open Machine Translation Campaign. The training data consisted of 1.8M sentence pairs, the development set was nist02 (878 sentences), and the test sets were nist05 (1082 sentences), nist06 (1664 sentences) and nist08 (1357 sentences).
We compared the proposed approach with three strong baselines:
Moses: a phrase-based machine translation system BIBREF15 ;
NMT1: an attention based NMT BIBREF0 system at https://github.com/lisa-groundhog/GroundHog;
NMT2: another implementation of BIBREF0 at https://github.com/nyu-dl/dl4mt-tutorial.
We developed the proposed approach based on NMT2, and denoted it as SA-NMT.
We followed the standard pipeline to run Moses. GIZA++ with grow-diag-final-and was used to build the translation model. We trained a 5-gram target language model on the Gigaword corpus, and used a lexicalized distortion model. All experiments were run with the default settings.
To train NMT1, NMT2 and SA-NMT, we employed the same settings for fair comparison. Specifically, except for the stopping iteration, which was selected using development data, we used the default settings set out in BIBREF0 for all NMT-based systems: the dimension of word embedding was 620, the dimension of hidden units was 1000, the batch size was 80, the source and target side vocabulary sizes were 30000, the maximum sequence length was 50, the beam size for decoding was 12, and the optimization was done by Adadelta with all hyper-parameters suggested by BIBREF16 . Particularly for SA-NMT, we employed a conventional word aligner to obtain the word alignment on the training data before training SA-NMT. In this paper, we used two different aligners, fast_align and GIZA++. We tuned the hyper-parameter INLINEFORM0 to be 0.3 on the development set, to balance the preference between translation and alignment. Training was conducted on a single Tesla K40 GPU machine. Each update took about 3.0 seconds for both NMT2 and SA-NMT, and 2.4 seconds for NMT1. Roughly, it took about 10 days for NMT2 to finish 300000 updates.
We implemented three different losses to supervise the attention, as described in §3.2. To explore their behavior on the development set, we employed GIZA++ to generate the alignment on the training set prior to training SA-NMT. In Table TABREF21 , we can see that MUL is better than MSE. Furthermore, CE performs best among all losses, and thus we adopt it for the following experiments.
In addition, we also ran fast_align to generate alignments as the supervision for SA-NMT, and the results are reported in Table TABREF22 . We can see that GIZA++ performs slightly better than fast_align, and thus we fix GIZA++ as the external aligner in the following experiments.
Figure FIGREF26 shows the learning curves of NMT2 and SA-NMT on the development set. We can see that NMT2 generally obtains higher BLEU as the number of updates increases, peaking at around 150000 updates, but is unstable from then on. In contrast, SA-NMT delivers much better BLEU in the early updates and performs more steadily over the course of training, although it takes more updates to reach its peak.
Table TABREF27 reports the main end-to-end translation results for the large scale task. We find that both standard NMT systems generally outperform Moses, except NMT1 on nist05. The proposed SA-NMT achieves significant and consistent improvements over all three baseline systems, and it obtains an averaged gain of 2.2 BLEU points on the test sets over its direct baseline NMT2. It is clear from these results that our supervised attention mechanism is highly effective in practice.
As explained in §2, standard NMT can not use the target word information to predict its aligned source words, and thus might fail to predict the correct source words for some target words. For example, for the sentence in the training set in Figure FIGREF29 (a), NMT2 aligned `following' to `皮诺契特 (gloss: pinochet)' rather than `继 (gloss: follow)', and worse still it aligned the word `.' to `在 (gloss: in)' rather than `。' even though this word is relatively easy to align correctly. In contrast, with the help of information from the target word itself, GIZA++ successfully aligned both `following' and `.' to the expected source words (see Figure FIGREF29 (c)). With the alignment results from GIZA++ as supervision, we can see that our SA-NMT can imitate GIZA++ and thus align both words correctly. More importantly, for sentences in the unseen test set, like GIZA++, SA-NMT confidently aligned `but' and `.' to their correct source words respectively as in Figure FIGREF29 (b), where NMT2 failed. It seems that SA-NMT can learn its alignment behavior from GIZA++, and subsequently apply the alignment abilities it has learned to unseen test sentences.
Table TABREF30 shows the overall results on the word alignment task in terms of alignment error rate (AER). We used the manually aligned dataset from BIBREF5 as the test set. Following BIBREF17 , we force-decode the bilingual sentence pairs (source and reference sentences) to obtain the alignment matrices, and then for each target word we extract a one-to-one alignment by picking the source word with the highest alignment confidence as the hard alignment. From Table TABREF30 , we can see clearly that standard NMT (NMT2) is far behind GIZA++ in alignment quality. This shows that supervising the attention with GIZA++ is both possible and promising. With the help of GIZA++, our supervised attention based NMT (SA-NMT) significantly reduces the AER compared with its unsupervised counterpart (NMT2). This confirms our intuition: the alignment is improved, leading to better translation performance.
Note that there is still a gap between SA-NMT and GIZA++, as indicated in Table TABREF30 . Since SA-NMT was trained for machine translation rather than word alignment, it should be possible to reduce its AER further if we target the word alignment task alone. For example, we can enlarge INLINEFORM0 in Eq.( EQREF12 ) to bias the training objective towards the word alignment task, or we can change the architecture slightly to add the target-word information crucial for alignment, as in BIBREF18 , BIBREF19 .
Results on the Low Resource Translation Task
For the low resource translation task, we used the BTEC corpus as the training data, which consists of 30k sentence pairs with 0.27M Chinese words and 0.33M English words. As development and test sets, we used the CSTAR03 and IWSLT04 held out sets, respectively. We trained a 4-gram language model on the target side of training corpus for running Moses. For training all NMT systems, we employed the same settings as those in the large scale task, except that vocabulary size is 6000, batch size is 16, and the hyper-parameter INLINEFORM0 for SA-NMT.
Table TABREF32 reports the final results. Firstly, we can see that both standard neural machine translation systems, NMT1 and NMT2, are much worse than Moses, with a substantial gap. This result is not difficult to understand: neural network systems typically require sufficient data to reach good performance, and thus low resource translation tasks are very challenging for them. Secondly, the proposed SA-NMT gains substantially over NMT2, as in the large scale task, and the gap to Moses is narrowed considerably.
While our SA-NMT does not surpass the state-of-the-art Moses as it does in the large scale setting, this is a strong result when compared with previous work on low resource translation tasks: arthur+:2016 gained over Moses on the Japanese-to-English BTEC corpus, but they resorted to a corpus consisting of 464k sentence pairs; luong+manning:2015 reported performance comparable to Moses on English-to-Vietnamese with 133k sentence pairs, which is more than 4 times our corpus size. Our method could potentially surpass Moses by using reranking as in BIBREF20 , BIBREF21 , but this is beyond the scope of this paper and we leave it as future work.
Related Work
Many recent works have led to notable improvements in the attention mechanism for neural machine translation. tu+:2016 introduced an explicit coverage vector into the attention mechanism to address the over-translation and under-translation inherent in NMT. feng+:2016 proposed an additional recurrent structure for attention to capture long-term dependencies. cheng+:2016 proposed an agreement-based bidirectional NMT model for symmetrizing alignment. cohn+:2016 incorporated multiple structural alignment biases into attention learning for better alignment. All of them improved the attention models that were learned in an unsupervised manner. While we do not modify the attention model itself, we learn it in a supervised manner, therefore our approach is orthogonal to theirs.
It has always been standard practice to learn reordering models from alignments for conventional SMT either at the phrase level or word level. At the phrase level, koehn+:2007 proposed a lexicalized MSD model for phrasal reordering; xiong+:2006 proposed a feature-rich model to learn phrase reordering for BTG; and li+:2014 proposed a neural network method to learn a BTG reordering model. At the word level, bisazza+federico:2016 surveyed many word reordering models learned from alignment models for SMT, and in particular there are some neural network based reordering models, such as BIBREF22 . Our work is inspired by these works in spirit, and it can be considered to be a recurrent neural network based word-level reordering model. The main difference is that in our approach the reordering model and translation model are trained jointly rather than separately, as in their work.
Conclusion
It has been shown that the attention mechanism in NMT is worse than conventional word alignment models in its alignment accuracy. This paper first provides an explanation for this by viewing the attention mechanism from the point of view of reordering. It then proposes a supervised attention for NMT with guidance from external conventional alignment models, inspired by the supervised reordering models in conventional SMT. Experiments on two Chinese-to-English translation tasks show that the proposed approach achieves better alignment results, leading to significant gains relative to standard attention based NMT.
Acknowledgements
We would like to thank Xugang Lu for invaluable discussions on this work. | BTEC corpus, the CSTAR03 and IWSLT04 held out sets, the NIST2008 Open Machine Translation Campaign |
c80669cb444a6ec6249b971213b0226f59940a82 | c80669cb444a6ec6249b971213b0226f59940a82_0 | Q: On average, by how much do they reduce the diarization error?
Text: Introduction
Speaker diarization is the task of segmenting an audio recording in time, indexing each segment by speaker identity. In the standard version of the task BIBREF0, the goal is not to identify known speakers, but to co-index segments that are attributed to the same speaker; in other words, the task implies finding speaker boundaries and grouping segments that belong to the same speaker (including determining the number of distinct speakers). Often diarization is run, in parallel or in sequence, with speech recognition with the goal of achieving speaker-attributed speech-to-text transcription BIBREF1.
Ensemble classifiers BIBREF2 are a common way of boosting the performance of machine learning systems, by pooling the outputs of multiple classifiers. In speech processing, they have been used extensively whenever multiple, separately trained speech recognizers are available, and the goal is to achieve better performance with little additional integration or modeling overhead. The most well-known of these methods in speech processing is ROVER (recognition output voting for error reduction) BIBREF3. ROVER aligns the outputs of multiple recognizers word-by-word, and then decides on the most probable word at each position by simple majority or confidence-weighted vote. Confusion network combination (CNC) is a generalization of this idea that makes use of multiple word hypotheses (e.g., in lattice or n-best form) from each recognizer BIBREF4, BIBREF5.
Given the pervasive use and effectiveness of ensemble methods, it is perhaps surprising that so far no ensemble algorithm has been used widely for diarization. In this paper we present such an algorithm and apply it to the problem of combining the diarization output obtained from parallel recording channels. This scenario arises naturally when processing speech captured by multiple microphones, even when the raw signals are combined using beamforming (because multiple beams can be formed and later combined for improved accuracy, as described in BIBREF6). In a nod to the ROVER algorithm, we call the algorithm DOVER (diarization output voting for error reduction). As discussed later, while DOVER is not a variant of ROVER, a duality can be observed between the two algorithms.
Section SECREF2 presents the DOVER algorithm. Section SECREF3 describes the experiments we ran to test it on two different datasets involving multi-microphone speech capture. Section SECREF4 concludes and points out open problems and future directions.
The Algorithm ::: Motivation and prior work
The reason that combining diarization outputs in a ROVER-like manner is not straightforward is the complex structure of the task: a diarization system has to perform segmentation (finding speaker boundaries) and decisions about identity of speakers across segments. Where those functions are performed by specialized classifiers inside the diarization algorithm, ensemble methods could easily be used. For example, multiple speaker change detectors could vote on a consensus, or a speaker clustering algorithm could combine multiple acoustic embeddings to evaluate cluster similarity BIBREF7.
However, if we are given only the outputs of multiple diarization processes for the same input, or the diarization systems are only available as black boxes, it is not clear on what part of the output one should “vote”, and how to combine the various hypotheses.
One approach would be to solve diarization as an integer linear programming (ILP) problem BIBREF8. In ILP-based diarization, a speaker labeling is found that is the best fit to a collection of local measures of speaker similarity (i.e., the similarity of speech at times $i$ and $j$ is commensurate with the cost of assigning different speaker labels to $i$ and $j$). We could translate the different diarization outputs into a set of local similarity costs, pool the costs that pertain to the same locations of speech, and then find a new diarization labeling with ILP. A similar approach has been used for ensemble segmentation of images BIBREF9. However, ILP is computationally costly and therefore not widely used in diarization practice.
The prior method that comes closest to our purpose is a proposal by Tranter BIBREF10, in which pairs of diarization outputs are combined. The method identifies regions in the audio on which both input diarizations agree, and passes them through to the output. Disagreements between the inputs are adjudicated by evaluating speaker identity/nonidentity according to an external classifier (typically a version of the Bayes information criterion, BIC BIBREF11). Our goal in this work is to reconcile an arbitrary number of diarization outputs, and to do so using only the outputs themselves, without requiring further examination of the acoustic evidence.
The Algorithm ::: The DOVER approach
Our algorithm maps the anonymous speaker labels from multiple diarization outputs into a common label space, and then performs a simple voting for each region of audio. A “region” for this purpose is a maximal segment delimited by any of the original speaker boundaries, from any of the input segmentations. The combined (or consensus) labeling is then obtained by stringing the majority labels for all regions together.
The remaining question is how labels are to be mapped to a common label space. We do so by using the same criterion as used by the diarization error (DER) metric itself, since the goal of the algorithm is to minimize the expected mismatch between two diarization label sequences. Given two diarization outputs using labels $A_1, A_2, \ldots , A_m$ and $B_1, B_2, \ldots , B_n$, respectively, an injective mapping from $\lbrace A_i\rbrace $ to $\lbrace B_j \rbrace $ is found that minimizes the total time duration of speaker mismatches, as well as mismatches between speech and nonspeech. Any labels that have no correspondence (e.g., due to differing numbers of speakers) are retained. For more than two diarization outputs, a global mapping is constructed incrementally: after mapping the second output to the labels of the first, the third output is mapped to the first two. This is repeated until all diarization outputs are incorporated. Whenever there is a conflict arising from mapping the $i$th output to each of the prior $i-1$ outputs, it is resolved in favor of the label pairing sharing the longest common duration (overlap in time).
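A minimal sketch of the pairwise mapping step is shown below, assuming each hypothesis is a list of (start, end, label) segments. It only scores speaker-overlap time (the speech/nonspeech term mentioned above is ignored) and relies on the Hungarian algorithm for the injective assignment; the excerpt does not prescribe this particular solver, so treat it as one possible implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def overlap(a, b):
    # duration (in seconds) during which two (start, end) segments overlap
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def map_labels(hyp_a, hyp_b):
    # map hypothesis B's labels onto A's labels by maximizing shared speaking time
    labels_a = sorted({l for _, _, l in hyp_a})
    labels_b = sorted({l for _, _, l in hyp_b})
    cost = np.zeros((len(labels_b), len(labels_a)))
    for i, lb in enumerate(labels_b):
        for j, la in enumerate(labels_a):
            cost[i, j] = -sum(overlap((sa, ea), (sb, eb))
                              for sa, ea, xa in hyp_a if xa == la
                              for sb, eb, xb in hyp_b if xb == lb)
    rows, cols = linear_sum_assignment(cost)      # optimal injective assignment
    # labels without any temporal overlap keep their original (unmapped) names
    return {labels_b[i]: labels_a[j] for i, j in zip(rows, cols) if cost[i, j] < 0}
```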
Speech/nonspeech decisions are aggregated by outputting a speaker label if and only if the total vote tally for all speaker labels is at least half the total of all inputs, i.e., the probability of speech is $\ge 0.5$.
It is straightforward to generalize the algorithm to weighted inputs. Instead of each input diarization having equal weight (one system, one vote), the final voting step adds up the weights attached to the individual systems; the winning label again is the one with the highest tally. The weighted-voting version of the algorithm is spelled out in detail in Figure FIGREF5.
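The weighted voting step itself can be sketched as follows; this is my own simplified version, not the Figure FIGREF5 pseudocode. It assumes all inputs already use the common label space, represents each hypothesis as (start, end, label) segments with an associated weight, and breaks ties by whichever label is encountered first.

```python
from collections import defaultdict

def dover_vote(hypotheses, weights):
    # hypotheses: list of segment lists [(start, end, label), ...], one per input system
    boundaries = sorted({t for segs in hypotheses for s, e, _ in segs for t in (s, e)})
    total_weight = sum(weights)
    output = []
    for start, end in zip(boundaries, boundaries[1:]):       # maximal regions between boundaries
        tally = defaultdict(float)
        for segs, w in zip(hypotheses, weights):
            for s, e, label in segs:
                if s <= start and end <= e:                  # this input labels the whole region
                    tally[label] += w
                    break
        # emit speech only if the speaker vote mass is at least half of the total weight
        if tally and sum(tally.values()) >= 0.5 * total_weight:
            output.append((start, end, max(tally, key=tally.get)))
    return output
```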
The Algorithm ::: An example
Figure FIGREF7 shows the workings of the algorithm for three inputs (diarization system outputs) A, B, and C. For simplicity, non-speech regions are omitted. Also for simplicity, the inputs are given equal weight. Step 1 shows the original speaker labelings. In Step 2 of the algorithm, the labels from System B have been mapped to labels from System A, using the minimum-diarization-cost criterion. In Step 3, the output of System C has been mapped to the (already mapped, where applicable) outputs from Systems A and B. The result is that all three diarization versions now use the same labels where possible, and in the final step (voting) the consensus labels are determined by taking the majority label for each segmentation region.
Note that the final output contains one region (shown in blue shading) for which no majority label exists, since each of the labels “A1”, “A2” and “C2” had only one vote. In our experiments, we break ties by picking the first label. Alternatively, a random label could be picked, or the region in question could be apportioned equally to the competing labels (e.g., choosing a temporal ordering that minimizes speaker changes).
The Algorithm ::: Anchoring the label mapping
The construction of the global label mapping is greedy, and dependent on the ordering of input systems. (A non-greedy, global optimization of the label mapping for all $N$ inputs would be exponential in the number of inputs $N$.) The choice of the first input, in particular, could affect the quality of results, since it anchors the computation of all label mappings. One strategy is to pick the centroid, i.e., the diarization hypothesis that has the smallest aggregate distance (DER) to all the other diarization outputs. Another, more costly, approach is to run the algorithm $N$ times, once for each input as the anchor. Then, the $N$ DOVER outputs are themselves combined again (with equal weights) in another run of the algorithm. For $N$ inputs, this multiplies the overall computation by a factor of $N+1$.
In our experiments we use a variant of the centroid approach: The input diarization hypotheses are ranked by their average DER to all the other hypotheses. The result is that the centroid comes first, but outlier hypotheses also tend to end up at the bottom of the ranking. We then apply weights to the hypotheses that decay slowly from 1, as a function of rank:
The effect of this is that two lower-ranked hypotheses that agree can still override a single higher-ranked hypothesis, but ties are broken in favor of the higher-ranked hypothesis. (If the inputs came with externally supplied ranks, we multiply them with the rank-based weights.)
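The exact decay function is not reproduced in this excerpt, so the snippet below is only a hedged placeholder: it assigns weight 1 to the top-ranked hypothesis and weights that decay slowly with rank, with the decay exponent chosen arbitrarily for illustration.

```python
def rank_weights(num_inputs, decay=0.1):
    # rank 1 gets weight 1.0; lower-ranked hypotheses get slightly smaller weights
    # (the decay exponent here is an assumption, not the value used in the paper)
    return [1.0 / (rank ** decay) for rank in range(1, num_inputs + 1)]
```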
The Algorithm ::: Duality of DOVER and ROVER
ROVER and DOVER solve different kinds of tasks: the former manipulates word labels at discrete positions in a sequence, whereas the latter manipulates anonymous speaker labels positioned on a continuous time axis. However, there is an interesting duality between the two algorithms.
In ROVER, the input (word) labels already live in a common name space (the vocabulary) and need to be aligned in time. In DOVER, the input (speaker) labels live on a common time axis and need to be aligned in a common name space (mapped). After those two kinds of label alignment are completed, the voting step is similar in the two algorithms. Note, also, that the distinction between word sequence and label alignment mirrors the different error metrics. Word error is mediated by a string alignment that minimizes edit distance. Diarization error is mediated by a speaker label alignment (i.e., mapping) that minimizes the sum of speaker and speech/nonspeech error.
Experiments and Results ::: Data
We validated the DOVER algorithm on two datasets of meeting recordings with multi microphone channels. Our focus on this genre of speech is motivated by our overall interest in technology that can create high-quality speaker-attributed transcripts of multi-person meetings.
The first dataset was drawn from the NIST 2007 Rich Transcription (RT-07) evaluation BIBREF13. The RT-07 “conference meeting” test set consists of 8 meetings from four different recording sites, of varying lengths and with the number of microphones ranging from 3 to 16. Each meeting has from four to six participants, with 31 distinct speakers in total. Diarization error is evaluated on a 22-minute speaker-labeled excerpt from each meeting.
The second dataset consists of 5 internal meetings used in Microsoft's “Project Denmark” BIBREF6. Three of the five meetings were recorded with seven independent consumer devices, followed by automatic synchronization as described in BIBREF14. The other two meetings were recorded with a seven-channel circular microphone array. The meetings took place in several different rooms and lasted for 30 minutes to one hour each, with three to eleven participants per meeting. The meetings were neither scripted nor staged; the participants were familiar with each other and conducted normal work discussions. The diarization reference labels were derived from time- and speaker-marked transcripts created by professional transcribers based on both close-talking and far-field recordings.
Experiments and Results ::: Diarization system
All original diarization outputs for input to DOVER were created with a reimplementation of the ICSI diarization algorithm BIBREF15. The algorithm starts with a uniform segmentation of the audio into snippets of equal duration where each segment constitutes its own speaker cluster, followed by iterative agglomerative clustering and resegmentation. Distance between speaker clusters is measured by the log likelihood difference between a single-speaker hypothesis (one Gaussian mixture model) versus the two-speaker hypothesis (two GMMs). In each iteration, the two most similar speaker clusters are merged, followed by a resegmentation of the entire audio stream by Viterbi alignment to an ergodic HMM over all speaker models. The merging process stops when a BIC-like criterion BIBREF16 indicates no further gains in the model likelihood. When multiple feature streams are used, as described below, the data is modeled by a weighted combination of separate GMMs for each stream.
No attempt is made to detect overlapping speech; therefore all our results have an error rate floor that corresponds to the proportion of overlapped speech (about 10% in the Denmark data).
Experiments and Results ::: Experiments on RT-07 data
We processed the NIST conference meetings using the weighted delay-and-sum BeamformIt tool BIBREF17, using $N-1$ audio channels at a time, and resulting in $N$ different audio streams. This is the same leave-one-out strategy as described in BIBREF18 for speech recognition. Furthermore, we rotated the choice of reference channel in these runs to further increase diversity among the outputs, as advocated in BIBREF14. We then ran diarization on each of the resulting audio streams, and DOVER on their outputs. Speech activity was obtained from an HMM-based algorithm that was part of the SRI-ICSI meeting recognition system originally used in the RT-07 evaluation BIBREF19.
Three different feature sets were used in diarization:
Mel-frequency cepstral coefficients (MFCCs), 19 dimensions, extracted every 10 ms from the raw waveforms (no beamforming)
MFCCs extracted from the beamformed audio
MFCCs from beamformed audio, augmented with a vector of estimated time-differences-of-arrival (TDOAs) between the different channels, following BIBREF20
Table TABREF17 shows the outcomes. The first three result columns give speaker error rates for the individual audio channels. Note that the “min” value is an oracle result, i.e., the best that one could do by picking a single channel for diarization. The last two columns give the speaker error and overall DER for the DOVER-combined diarization output. Note that the difference between speaker error and DER is nearly constant, since all systems use the same speech activity information. The missed speech rate is about 3.9%, while the false alarm rate is 4.6%.
Looking at the first three columns, we observe that the range of error rates is very large (greater than 10% absolute) depending on which channel is chosen. The DOVER-generated diarization has error rates that are closer to the oracle choice (minimum error) than to the average error, thereby avoiding the risk of a poor choice of channel.
Experiments and Results ::: Experiments on Project Denmark data
For experiments on this dataset, we used byproducts of the Project Denmark meeting transcription system described in BIBREF14. The system aligns the (possibly unsynchronized) audio streams, and then performs leave-one-out beamforming on 6 out of 7 audio streams, round-robin, resulting in 7 different new audio streams. For purposes of speaker identification, it then computes 128-dimensional d-vectors (acoustic speaker embeddings from a neural network trained to perform speaker ID BIBREF22) at 320 ms intervals. The beamformed audio streams are also transcribed by a speech recognition (ASR) system. Here we use the ASR output only as a speech activity detector (joining words separated by no more than 0.1 s of nonspeech, and padding 0.5 s at the boundaries).
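The word-timing-based speech activity detection described above can be sketched as follows; the input format (a sorted list of word start/end times) is an assumption on my part.

```python
def words_to_speech_activity(words, max_gap=0.1, pad=0.5):
    # words: list of (start_time, end_time) for recognized words, sorted by start time
    segments = []
    for start, end in words:
        if segments and start - segments[-1][1] <= max_gap:
            segments[-1][1] = max(segments[-1][1], end)      # bridge short nonspeech gaps
        else:
            segments.append([start, end])                    # open a new speech segment
    return [(max(0.0, s - pad), e + pad) for s, e in segments]
```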
While the Denmark system currently performs speaker identification using enrolled profiles, we are simulating a scenario where speaker-agnostic diarization is applied instead (or in addition, e.g., if only a subset of speakers is known to the system). Since the Denmark audio channels are symmetrical, and no audio channel has privileged status, we would have to either select one channel for diarization, or perform diarization on all channels and combine the outputs; this is where DOVER naturally finds an application.
We ran experiments with three sets of acoustic features, all extracted from the beamformed audio:
MFCCs, 19 dimensions, extracted every 10 ms
MFCCs plus the first 30 principal components of the d-vectors (replicated to match the frame-rate of the MFCCs)
MFCCs plus $3 \times 30$ principal components from 3 out of the 7 d-vector streams, i.e., a partial feature-level combination of audio streams. For channel $i$ the d-vectors were taken from channel $i$ itself, $i-1\pmod {7}$, and $i+1\pmod {7}$.
We also took the outputs of the speaker ID component of the system (from each beamformed audio channel), treated them as diarization labels, and ran DOVER to see if the algorithm could improve the results.
Table TABREF23 shows the results for diarization based on the three feature set, as well as based on speaker ID, using the same format as for the RT-07 results. Here, too, the difference between speaker error and DER is nearly constant, since all systems use the same speech activity information derived from the speech recognizer. The DER thus includes about 0.6% false alarms and 11.3% miss rate (of which 10.0% are due to overlapped speech, which we do not attempt to detect).
The most important observation is that the DOVER output has a speaker error rate that is very close to, and for the most part slightly lower than, the best (oracle) choice of channel. As for the RT-07 data, the DOVER output is consistently much better than the channel average. Also, the max values show that there is still ample opportunity for very poor choices of a single channel; DOVER removes the need to make that choice.
The last row of results shows that even when the diarization on individual channels is very accurate (due to the availability of speaker models), DOVER can still give a substantial relative error reduction, surpassing the best channel's performance.
Conclusions and Outlook
We have presented a weighted voting algorithm for combining the outputs from several diarization systems over a shared input. The DOVER algorithm first uses a DER-minimizing criterion to map all speaker labels to a common name space, and then performs majority voting at each time instant (including on whether there is speech or not). The proposed method naturally lends itself to unifying diarization outputs obtained from parallel audio channels, e.g., as they arise from meeting capture with multiple microphones or devices. We tested the algorithm on a NIST conference meeting evaluation set, as well as on internal meetings, using diarization by agglomerative clustering combined with a variety of feature streams. We find that the DOVER output consistently beats the averages of the input channels, and can be very close or improving on the oracle error rate obtained by picking the single best channel for a given meeting.
Some interesting open issues remain. As mentioned, we currently do not attempt to diarize overlapping speech. Once such a capability is available, the DOVER algorithm will have to be modified to handle simultaneous speakers. Another issue is that current diarization systems only output their single best guesses at the speaker labeling. In analogy to confusion network combination, we may want to consider diarization algorithms that produce multiple weighted hypotheses, which are then in turn combined across all systems. A modified DOVER could be used both to generate the “speaker confusion networks” from individual diarization systems, and to combine them.
Acknowledgments
We thank our colleagues for help with the Denmark system and data collection, Xavi Anguera for answering questions regarding BeamformIt, and ICSI for assistance with the RT-07 data. | Unanswerable |
10045d7dac063013a8447b5a4bc3a3c2f18f9e82 | 10045d7dac063013a8447b5a4bc3a3c2f18f9e82_0 | Q: Do they compare their algorithm to voting without weights?
Text: Introduction
Speaker diarization is the task of segmenting an audio recording in time, indexing each segment by speaker identity. In the standard version of the task BIBREF0, the goal is not to identify known speakers, but to co-index segments that are attributed to the same speaker; in other words, the task implies finding speaker boundaries and grouping segments that belong to the same speaker (including determining the number of distinct speakers). Often diarization is run, in parallel or in sequence, with speech recognition with the goal of achieving speaker-attributed speech-to-text transcription BIBREF1.
Ensemble classifiers BIBREF2 are a common way of boosting the performance of machine learning systems, by pooling the outputs of multiple classifiers. In speech processing, they have been used extensively whenever multiple, separately trained speech recognizers are available, and the goal is to achieve better performance with little additional integration or modeling overhead. The most well-known of these methods in speech processing is ROVER (recognition output voting for error reduction) BIBREF3. ROVER aligns the outputs of multiple recognizers word-by-word, and then decides on the most probable word at each position by simple majority or confidence-weighted vote. Confusion network combination (CNC) is a generalization of this idea that makes use of multiple word hypotheses (e.g., in lattice or n-best form) from each recognizer BIBREF4, BIBREF5.
Given the pervasive use and effectiveness of ensemble methods, it is perhaps surprising that so far no ensemble algorithm has been used widely for diarization. In this paper we present such an algorithm and apply it to the problem of combining the diarization output obtained from parallel recording channels. This scenario arises naturally when processing speech captured by multiple microphones, even when the raw signals are combined using beamforming (because multiple beams can be formed and later combined for improved accuracy, as described in BIBREF6). In a nod to the ROVER algorithm, we call the algorithm DOVER (diarization output voting for error reduction). As discussed later, while DOVER is not a variant of ROVER, a duality can be observed between the two algorithms.
Section SECREF2 presents the DOVER algorithm. Section SECREF3 describes the experiments we ran to test it on two different datasets involving multi-microphone speech capture. Section SECREF4 concludes and points out open problems and future directions.
The Algorithm ::: Motivation and prior work
The reason that combining diarization outputs in a ROVER-like manner is not straightforward is the complex structure of the task: a diarization system has to perform segmentation (finding speaker boundaries) and decisions about identity of speakers across segments. Where those functions are performed by specialized classifiers inside the diarization algorithm, ensemble methods could easily be used. For example, multiple speaker change detectors could vote on a consensus, or a speaker clustering algorithm could combine multiple acoustic embeddings to evaluate cluster similarity BIBREF7.
However, if we are given only the outputs of multiple diarization processes for the same input, or the diarization systems are only available as black boxes, it is not clear on what part of the output one should “vote”, and how to combine the various hypotheses.
One approach would be to solve diarization as an integer linear programming (ILP) problem BIBREF8. In ILP-based diarization, a speaker labeling is found that is the best fit to a collection of local measures of speaker similarity (i.e., the similarity of speech at times $i$ and $j$ is commensurate with the cost of assigning different speaker labels to $i$ and $j$). We could translate the different diarization outputs into a set of local similarity costs, pool the costs that pertain to the same locations of speech, and then find a new diarization labeling with ILP. A similar approach has been used for ensemble segmentation of images BIBREF9. However, ILP is computationally costly and therefore not widely used in diarization practice.
The prior method that comes closest to our purpose is a proposal by Tranter BIBREF10, in which pairs of diarization outputs are combined. The method identifies regions in the audio on which both input diarizations agree, and passes them through to the output. Disagreements between the inputs are adjudicated by evaluating speaker identity/nonidentity according to an external classifier (typically a version of the Bayes information criterion, BIC BIBREF11). Our goal in this work is to reconcile an arbitrary number of diarization outputs, and to do so using only the outputs themselves, without requiring further examination of the acoustic evidence.
The Algorithm ::: The DOVER approach
Our algorithm maps the anonymous speaker labels from multiple diarization outputs into a common label space, and then performs a simple voting for each region of audio. A “region” for this purpose is a maximal segment delimited by any of the original speaker boundaries, from any of the input segmentations. The combined (or consensus) labeling is then obtained by stringing the majority labels for all regions together.
The remaining question is how labels are to be mapped to a common label space. We do so by using the same criterion as used by the diarization error (DER) metric itself, since the goal of the algorithm is to minimize the expected mismatch between two diarization label sequences. Given two diarization outputs using labels $A_1, A_2, \ldots , A_m$ and $B_1, B_2, \ldots , B_n$, respectively, an injective mapping from $\lbrace A_i\rbrace $ to $\lbrace B_j \rbrace $ is found that minimizes the total time duration of speaker mismatches, as well as mismatches between speech and nonspeech. Any labels that have no correspondence (e.g., due to differing numbers of speakers) are retained. For more than two diarization outputs, a global mapping is constructed incrementally: after mapping the second output to the labels of the first, the third output is mapped to the first two. This is repeated until all diarization outputs are incorporated. Whenever there is a conflict arising from mapping the $i$th output to each of the prior $i-1$ outputs, it is resolved in favor of the label pairing sharing the longest common duration (overlap in time).
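To make the mapping criterion concrete, here is a hedged, simplified illustration: with both diarization outputs sampled on a common frame grid (None meaning nonspeech), the cost of a candidate label mapping is the total time on which the mapped labels disagree, covering both speaker mismatch and speech/nonspeech mismatch. The frame-based representation and names are my own simplification.

```python
def mapping_cost(frames_a, frames_b, label_map, frame_step=0.01):
    # frames_*: per-frame speaker labels (None = nonspeech) on a shared time grid
    mismatched = sum(
        1 for a, b in zip(frames_a, frames_b)
        if label_map.get(b, b) != a          # unmapped labels keep their own names
    )
    return mismatched * frame_step           # seconds of disagreement under this mapping
```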
Speech/nonspeech decisions are aggregated by outputting a speaker label if and only if the total vote tally for all speaker labels is at least half the total of all inputs, i.e., the probability of speech is $\ge 0.5$.
It is straightforward to generalize the algorithm to weighted inputs. Instead of each input diarization having equal weight (one system, one vote), the final voting step adds up the weights attached to the individual systems; the winning label again is the one with the highest tally. The weighted-voting version of the algorithm is spelled out in detail in Figure FIGREF5.
The Algorithm ::: An example
Figure FIGREF7 shows the workings of the algorithm for three inputs (diarization system outputs) A, B, and C. For simplicity, non-speech regions are omitted. Also for simplicity, the inputs are given equal weight. Step 1 shows the original speaker labelings. In Step 2 of the algorithm, the labels from System B have been mapped to labels from System A, using the minimum-diarization-cost criterion. In Step 3, the output of System C has been mapped to the (already mapped, where applicable) outputs from Systems A and B. The result is that all three diarization versions now use the same labels where possible, and in the final step (voting) the consensus labels are determined by taking the majority label for each segmentation region.
Note that the final output contains one region (shown in blue shading) for which no majority label exists, since each of the labels “A1”, “A2” and “C2” had only one vote. In our experiments, we break ties by picking the first label. Alternatively, a random label could be picked, or the region in question could be apportioned equally to the competing labels (e.g., choosing a temporal ordering that minimizes speaker changes).
The Algorithm ::: Anchoring the label mapping
The construction of the global label mapping is greedy, and dependent on the ordering of input systems. (A non-greedy, global optimization of the label mapping for all $N$ inputs would be exponential in the number of inputs $N$.) The choice of the first input, in particular, could affect the quality of results, since it anchors the computation of all label mappings. One strategy is to pick the centroid, i.e., the diarization hypothesis that has the smallest aggregate distance (DER) to all the other diarization outputs. Another, more costly, approach is to run the algorithm $N$ times, once for each input as the anchor. Then, the $N$ DOVER outputs are themselves combined again (with equal weights) in another run of the algorithm. For $N$ inputs, this multiplies the overall computation by a factor of $N+1$.
In our experiments we use a variant of the centroid approach: The input diarization hypotheses are ranked by their average DER to all the other hypotheses. The result is that the centroid comes first, but outlier hypotheses also tend to end up at the bottom of the ranking. We then apply weights to the hypotheses that decay slowly from 1, as a function of rank:
The effect of this is that two lower-ranked hypotheses that agree can still override a single higher-ranked hypothesis, but ties are broken in favor of the higher-ranked hypothesis. (If the inputs came with externally supplied ranks, we multiply them with the rank-based weights.)
The Algorithm ::: Duality of DOVER and ROVER
ROVER and DOVER solve different kinds of tasks: the former manipulates word labels at discrete positions in a sequence, whereas the latter manipulates anonymous speaker labels positioned on a continuous time axis. However, there is an interesting duality between the two algorithms.
In ROVER, the input (word) labels already live in a common name space (the vocabulary) and need to be aligned in time. In DOVER, the input (speaker) labels live on a common time axis and need to be aligned in a common name space (mapped). After those two kinds of label alignment are completed, the voting step is similar in the two algorithms. Note, also, that the distinction between word sequence and label alignment mirrors the different error metrics. Word error is mediated by a string alignment that minimizes edit distance. Diarization error is mediated by a speaker label alignment (i.e., mapping) that minimizes the sum of speaker and speech/nonspeech error.
Experiments and Results ::: Data
We validated the DOVER algorithm on two datasets of meeting recordings with multi microphone channels. Our focus on this genre of speech is motivated by our overall interest in technology that can create high-quality speaker-attributed transcripts of multi-person meetings.
The first dataset was drawn from the NIST 2007 Rich Transcription (RT-07) evaluation BIBREF13. The RT-07 “conference meeting” test set consists of 8 meetings from four different recording sites, of varying lengths and with the number of microphones ranging from 3 to 16. Each meeting has from four to six participants, with 31 distinct speakers in total. Diarization error is evaluated on a 22-minute speaker-labeled excerpt from each meeting.
The second dataset consists of 5 internal meetings used in Microsoft's “Project Denmark” BIBREF6. Three of the five meetings were recorded with seven independent consumer devices, followed by automatic synchronization as described in BIBREF14. The other two meetings were recorded with a seven-channel circular microphone array. The meetings took place in several different rooms and lasted for 30 minutes to one hour each, with three to eleven participants per meeting. The meetings were neither scripted nor staged; the participants were familiar with each other and conducted normal work discussions. The diarization reference labels were derived from time- and speaker-marked transcripts created by professional transcribers based on both close-talking and far-field recordings.
Experiments and Results ::: Diarization system
All original diarization outputs for input to DOVER were created with a reimplementation of the ICSI diarization algorithm BIBREF15. The algorithm starts with a uniform segmentation of the audio into snippets of equal duration where each segment constitutes its own speaker cluster, followed by iterative agglomerative clustering and resegmentation. Distance between speaker clusters is measured by the log likelihood difference between a single-speaker hypothesis (one Gaussian mixture model) versus the two-speaker hypothesis (two GMMs). In each iteration, the two most similar speaker clusters are merged, followed by a resegmentation of the entire audio stream by Viterbi alignment to an ergodic HMM over all speaker models. The merging process stops when a BIC-like criterion BIBREF16 indicates no further gains in the model likelihood. When multiple feature streams are used, as described below, the data is modeled by a weighted combination of separate GMMs for each stream.
No attempt is made to detect overlapping speech; therefore all our results have an error rate floor that corresponds to the proportion of overlapped speech (about 10% in the Denmark data).
Experiments and Results ::: Experiments on RT-07 data
We processed the NIST conference meetings using the weighted delay-and-sum BeamformIt tool BIBREF17, using $N-1$ audio channels at a time, and resulting in $N$ different audio streams. This is the same leave-one-out strategy as described in BIBREF18 for speech recognition. Furthermore, we rotated the choice of reference channel in these runs to further increase diversity among the outputs, as advocated in BIBREF14. We then ran diarization on each of the resulting audio streams, and DOVER on their outputs. Speech activity was obtained from an HMM-based algorithm that was part of the SRI-ICSI meeting recognition system originally used in the RT-07 evaluation BIBREF19.
Three different feature sets were used in diarization:
Mel-frequency cepstral coefficients (MFCCs), 19 dimensions, extracted every 10 ms from the raw waveforms (no beamforming)
MFCCs extracted from the beamformed audio
MFCCs from beamformed audio, augmented with a vector of estimated time-differences-of-arrival (TDOAs) between the different channels, following BIBREF20
Table TABREF17 shows the outcomes. The first three result columns give speaker error rates for the individual audio channels. Note that the “min” value is an oracle result, i.e., the best that one could do by picking a single channel for diarization. The last two columns give the speaker error and overall DER for the DOVER-combined diarization output. Note that the difference between speaker error and DER is nearly constant, since all systems use the same speech activity information. The missed speech rate is about 3.9%, while the false alarm rate is 4.6%.
Looking at the first three columns, we observe that the range of error rates is very large (greater than 10% absolute) depending on which channel is chosen. The DOVER-generated diarization has error rates that are closer to the oracle choice (minimum error) than to the average error, thereby avoiding the risk of a poor choice of channel.
Experiments and Results ::: Experiments on Project Denmark data
For experiments on this dataset, we used byproducts of the Project Denmark meeting transcription system described in BIBREF14. The system aligns the (possibly unsynchronized) audio streams, and then performs leave-one-out beamforming on 6 out of 7 audio streams, round-robin, resulting in 7 different new audio streams. For purposes of speaker identification, it then computes 128-dimensional d-vectors (acoustic speaker embeddings from a neural network trained to perform speaker ID BIBREF22) at 320 ms intervals. The beamformed audio streams are also transcribed by a speech recognition (ASR) system. Here we use the ASR output only as a speech activity detector (joining words separated by no more than 0.1 s of nonspeech, and padding 0.5 s at the boundaries).
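The following minimal Python sketch illustrates the word-joining heuristic just described; the 0.1 s join gap and 0.5 s padding are taken from the text, while the input format (a sorted list of word start/end times in seconds) and the function name are assumptions made for illustration.

```python
def words_to_speech_segments(words, max_gap=0.1, pad=0.5):
    """Merge ASR word intervals into speech-activity segments.

    words: list of (start, end) tuples in seconds, sorted by start time.
    Returns a list of (start, end) speech segments.
    """
    segments = []
    for start, end in words:
        if segments and start - segments[-1][1] <= max_gap:
            # Gap to the previous word is small enough: extend that segment.
            segments[-1][1] = max(segments[-1][1], end)
        else:
            segments.append([start, end])
    # Pad each segment at its boundaries.
    return [(max(0.0, s - pad), e + pad) for s, e in segments]

# Example: the first two words are 0.05 s apart (merged), the third is 0.4 s
# away (new segment); both resulting segments are then padded by 0.5 s.
print(words_to_speech_segments([(1.0, 1.3), (1.35, 1.8), (2.2, 2.5)]))
```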
While the Denmark system currently performs speaker identification using enrolled profiles, we are simulating a scenario where speaker-agnostic diarization is applied instead (or in addition, e.g., if only a subset of speakers is known to the system). Since the Denmark audio channels are symmetrical, and no audio channel has privileged status, we would have to either select one channel for diarization, or perform diarization on all channels and combine the outputs; this is where DOVER naturally finds an application.
We ran experiments with three sets of acoustic features, all extracted from the beamformed audio:
MFCCs, 19 dimensions, extracted every 10 ms
MFCCs plus the first 30 principal components of the d-vectors (replicated to match the frame-rate of the MFCCs)
MFCCs plus $3 \times 30$ principal components from 3 out of the 7 d-vector streams, i.e., a partial feature-level combination of audio streams. For channel $i$ the d-vectors were taken from channel $i$ itself, $i-1\pmod {7}$, and $i+1\pmod {7}$.
We also took the outputs of the speaker ID component of the system (from each beamformed audio channel), treated them as diarization labels, and ran DOVER to see if the algorithm could improve the results.
Table TABREF23 shows the results for diarization based on the three feature sets, as well as based on speaker ID, using the same format as for the RT-07 results. Here, too, the difference between speaker error and DER is nearly constant, since all systems use the same speech activity information derived from the speech recognizer. The DER thus includes about 0.6% false alarms and 11.3% miss rate (of which 10.0% are due to overlapped speech, which we do not attempt to detect).
The most important observation is that the DOVER output has a speaker error rate that is very close to, and for the most part slightly lower than, the best (oracle) choice of channel. As for the RT-07 data, the DOVER output is consistently much better than the channel average. Also, the max values show that there is still ample opportunity for very poor choices of a single channel; DOVER removes the need to make that choice.
The last row of results shows that even when the diarization on individual channels is very accurate (due to the availability of speaker models), DOVER can still give a substantial relative error reduction, surpassing the best channel's performance.
Conclusions and Outlook
We have presented a weighted voting algorithm for combining the outputs from several diarization systems over a shared input. The DOVER algorithm first uses a DER-minimizing criterion to map all speaker labels to a common name space, and then performs majority voting at each time instant (including on whether there is speech or not). The proposed method naturally lends itself to unifying diarization outputs obtained from parallel audio channels, e.g., as they arise from meeting capture with multiple microphones or devices. We tested the algorithm on a NIST conference meeting evaluation set, as well as on internal meetings, using diarization by agglomerative clustering combined with a variety of feature streams. We find that the DOVER output consistently beats the averages of the input channels, and can be very close to, or even improve on, the oracle error rate obtained by picking the single best channel for a given meeting.
Some interesting open issues remain. As mentioned, we currently do not attempt to diarize overlapping speech. Once such a capability is available, the DOVER algorithm will have to be modified to handle simultaneous speakers. Another issue is that current diarization systems only output their single best guesses at the speaker labeling. In analogy to confusion network combination, we may want to consider diarization algorithms that produce multiple weighted hypotheses, which are then in turn combined across all systems. A modified DOVER could be used both to generate the “speaker confusion networks” from individual diarization systems, and to combine them.
Acknowledgments
We thank our colleagues for help with the Denmark system and data collection, Xavi Anguera for answering questions regarding BeamformIt, and ICSI for assistance with the RT-07 data. | No |
4e4946c023211712c782637fcca523deb126e519 | 4e4946c023211712c782637fcca523deb126e519_0 | Q: How do they assign weights between votes in their DOVER algorithm?
Text: Introduction
Speaker diarization is the task of segmenting an audio recording in time, indexing each segment by speaker identity. In the standard version of the task BIBREF0, the goal is not to identify known speakers, but to co-index segments that are attributed to the same speaker; in other words, the task implies finding speaker boundaries and grouping segments that belong to the same speaker (including determining the number of distinct speakers). Often diarization is run, in parallel or in sequence, with speech recognition with the goal of achieving speaker-attributed speech-to-text transcription BIBREF1.
Ensemble classifiers BIBREF2 are a common way of boosting the performance of machine learning systems, by pooling the outputs of multiple classifiers. In speech processing, they have been used extensively whenever multiple, separately trained speech recognizers are available, and the goal is to achieve better performance with little additional integration or modeling overhead. The most well-known of these methods in speech processing is ROVER (recognition output voting for error reduction) BIBREF3. ROVER aligns the outputs of multiple recognizers word-by-word, and then decides on the most probable word at each position by simple majority or confidence-weighted vote. Confusion network combination (CNC) is a generalization of this idea that makes use of multiple word hypotheses (e.g., in lattice or n-best form) from each recognizer BIBREF4, BIBREF5.
Given the pervasive use and effectiveness of ensemble methods, it is perhaps surprising that so far no ensemble algorithm has been used widely for diarization. In this paper we present such an algorithm and apply it to the problem of combining the diarization output obtained from parallel recording channels. This scenario arises naturally when processing speech captured by multiple microphones, even when the raw signals are combined using beamforming (because multiple beams can be formed and later combined for improved accuracy, as described in BIBREF6). In a nod to the ROVER algorithm, we call the algorithm DOVER (diarization output voting for error reduction). As discussed later, while DOVER is not a variant of ROVER, a duality can be observed between the two algorithms.
Section SECREF2 presents the DOVER algorithm. Section SECREF3 describes the experiments we ran to test it on two different datasets involving multi-microphone speech capture. Section SECREF4 concludes and points out open problems and future directions.
The Algorithm ::: Motivation and prior work
The reason that combining diarization outputs in a ROVER-like manner is not straightforward is the complex structure of the task: a diarization system has to perform segmentation (finding speaker boundaries) and decisions about identity of speakers across segments. Where those functions are performed by specialized classifiers inside the diarization algorithm, ensemble methods could easily be used. For example, multiple speaker change detectors could vote on a consensus, or a speaker clustering algorithm could combine multiple acoustic embeddings to evaluate cluster similarity BIBREF7.
However, if we are given only the outputs of multiple diarization processes for the same input, or the diarization systems are only available as black boxes, it is not clear on what part of the output one should “vote”, and how to combine the various hypotheses.
One approach would be to solve diarization as an integer linear programming (ILP) problem BIBREF8. In ILP-based diarization, a speaker labeling is found that is the best fit to a collection of local measures of speaker similarity (i.e., the similarity of speech at times $i$ and $j$ is commensurate with the cost of assigning different speaker labels to $i$ and $j$). We could translate the different diarization outputs into a set of local similarity costs, pool the costs that pertain to the same locations of speech, and then find a new diarization labeling with ILP. A similar approach has been used for ensemble segmentation of images BIBREF9. However, ILP is computationally costly and therefore not widely used in diarization practice.
The prior method that comes closest to our purpose is a proposal by Tranter BIBREF10, in which pairs of diarization outputs are combined. The method identifies regions in the audio on which both input diarizations agree, and passes them through to the output. Disagreements between the inputs are adjudicated by evaluating speaker identity/nonidentity according to an external classifier (typically a version of the Bayes information criterion, BIC BIBREF11). Our goal in this work is to reconcile an arbitrary number of diarization outputs, and to do so using only the outputs themselves, without requiring further examination of the acoustic evidence.
The Algorithm ::: The DOVER approach
Our algorithm maps the anonymous speaker labels from multiple diarization outputs into a common label space, and then performs a simple voting for each region of audio. A “region” for this purpose is a maximal segment delimited by any of the original speaker boundaries, from any of the input segmentations. The combined (or consensus) labeling is then obtained by stringing the majority labels for all regions together.
The remaining question is how labels are to be mapped to a common label space. We do so by using the same criterion as used by the diarization error (DER) metric itself, since the goal of the algorithm is to minimize the expected mismatch between two diarization label sequences. Given two diarization outputs using labels $A_1, A_2, \ldots , A_m$ and $B_1, B_2, \ldots , B_n$, respectively, an injective mapping from $\lbrace A_i\rbrace $ to $\lbrace B_j \rbrace $ is found that minimizes the total time duration of speaker mismatches, as well as mismatches between speech and nonspeech. Any labels that have no correspondence (e.g., due to differing numbers of speakers) are retained. For more than two diarization outputs, a global mapping is constructed incrementally: after mapping the second output to the labels of the first, the third output is mapped to the first two. This is repeated until all diarization outputs are incorporated. Whenever there is a conflict arising from mapping the $i$th output to each of the prior $i-1$ outputs, it is resolved in favor of the label pairing sharing the longest common duration (overlap in time).
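As a concrete illustration of this mapping step, the sketch below assumes each diarization output is given as a list of (start, end, speaker_label) segments and maps the labels of one output onto another by maximizing total time overlap (equivalently, minimizing the mismatch between the two). Solving the assignment with the Hungarian algorithm via scipy is an implementation choice made here for brevity, not something prescribed above, and the incremental multi-output mapping with conflict resolution is not shown.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def overlap(segs_a, segs_b):
    """Total time shared by two lists of (start, end) intervals."""
    return sum(max(0.0, min(e1, e2) - max(s1, s2))
               for s1, e1 in segs_a for s2, e2 in segs_b)

def map_labels(ref, hyp):
    """Map each speaker label in `hyp` to a label in `ref` (or keep it as-is)."""
    ref_labels = sorted({lab for _, _, lab in ref})
    hyp_labels = sorted({lab for _, _, lab in hyp})
    segs_of = lambda segs, lab: [(s, e) for s, e, l in segs if l == lab]
    ov = np.array([[overlap(segs_of(hyp, h), segs_of(ref, r))
                    for r in ref_labels] for h in hyp_labels])
    rows, cols = linear_sum_assignment(ov, maximize=True)
    mapping = {h: h for h in hyp_labels}      # labels with no counterpart are retained
    for i, j in zip(rows, cols):
        if ov[i, j] > 0:                      # only map labels that actually overlap
            mapping[hyp_labels[i]] = ref_labels[j]
    return mapping
```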
Speech/nonspeech decisions are aggregated by outputting a speaker label if and only if the total vote tally for all speaker labels is at least half the total of all inputs, i.e., the probability of speech is $\ge 0.5$.
It is straightforward to generalize the algorithm to weighted inputs. Instead of each input diarization having equal weight (one system, one vote), the final voting step adds up the weights attached to the individual systems; the winning label again is the one with the highest tally. The weighted-voting version of the algorithm is spelled out in detail in Figure FIGREF5.
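A compact sketch of the region-wise weighted vote is given below, assuming all inputs have already been mapped into a common label space (e.g., with a mapping step like the one sketched earlier) and are represented as lists of non-overlapping (start, end, label) segments; the data format and the arbitrary tie-breaking are illustrative simplifications.

```python
from collections import defaultdict

def speaker_at(segments, t):
    """Speaker active at time t in one diarization output, or None for nonspeech."""
    for s, e, lab in segments:
        if s <= t < e:
            return lab
    return None

def dover_vote(outputs, weights):
    # Region boundaries: every segment boundary from every input.
    bounds = sorted({b for segs in outputs for s, e, _ in segs for b in (s, e)})
    total_weight = sum(weights)
    result = []
    for s, e in zip(bounds, bounds[1:]):
        mid = 0.5 * (s + e)
        tally = defaultdict(float)
        for segs, w in zip(outputs, weights):
            lab = speaker_at(segs, mid)
            if lab is not None:
                tally[lab] += w
        # Output speech only if the total vote for speech is at least half of all weights.
        if tally and sum(tally.values()) >= 0.5 * total_weight:
            result.append((s, e, max(tally, key=tally.get)))  # ties broken arbitrarily here
    return result
```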
The Algorithm ::: An example
Figure FIGREF7 shows the workings of the algorithm for three inputs (diarization system outputs) A, B, and C. For simplicity, non-speech regions are omitted. Also for simplicity, the inputs are given equal weight. Step 1 shows the original speaker labelings. In Step 2 of the algorithm, the labels from System B have been mapped to labels from System A, using the minimum-diarization-cost criterion. In Step 3, the output of System C has been mapped to the (already mapped, where applicable) outputs from Systems A and B. The result is that all three diarization versions now use the same labels where possible, and in the final step (voting) the consensus labels are determined by taking the majority label for each segmentation region.
Note that the final output contains one region (shown in blue shading) for which no majority label exists, since each of the labels “A1”, “A2” and “C2” had only one vote. In our experiments, we break ties by picking the first label. Alternatively, a random label could be picked, or the region in question could be apportioned equally to the competing labels (e.g., choosing a temporal ordering that minimizes speaker changes).
The Algorithm ::: Anchoring the label mapping
The construction of the global label mapping is greedy, and dependent on the ordering of input systems. (A non-greedy, global optimization of the label mapping for all $N$ inputs would be exponential in the number of inputs $N$.) The choice of the first input, in particular, could affect the quality of results, since it anchors the computation of all label mappings. One strategy is to pick the centroid, i.e., the diarization hypothesis that has the smallest aggregate distance (DER) to all the other diarization outputs. Another, more costly, approach is to run the algorithm $N$ times, once for each input as the anchor. Then, the $N$ DOVER outputs are themselves combined again (with equal weights) in another run of the algorithm. For $N$ inputs, this multiplies the overall computation by a factor of $N+1$.
In our experiments we use a variant of the centroid approach: The input diarization hypotheses are ranked by their average DER to all the other hypotheses. The result is that the centroid comes first, but outlier hypotheses also tend to end up at the bottom of the ranking. We then apply weights to the hypotheses that decay slowly from 1, as a function of rank:
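$w_r = r^{-0.1}, \quad r = 1, \ldots , N$ (shown here only as an illustrative form with an assumed exponent: it starts at 1 for the top-ranked hypothesis and decays slowly with increasing rank, matching the description above).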
The effect of this is that two lower-ranked hypotheses that agree can still override a single higher-ranked hypothesis, but ties are broken in favor of the higher-ranked hypothesis. (If the inputs came with externally supplied ranks, we multiply them with the rank-based weights.)
The Algorithm ::: Duality of DOVER and ROVER
ROVER and DOVER solve different kinds of tasks: the former manipulates word labels at discrete positions in a sequence, whereas the latter manipulates anonymous speaker labels positioned on a continuous time axis. However, there is an interesting duality between the two algorithms.
In ROVER, the input (word) labels already live in a common name space (the vocabulary) and need to be aligned in time. In DOVER, the input (speaker) labels live on a common time axis and need to be aligned in a common name space (mapped). After those two kinds of label alignment are completed, the voting step is similar in the two algorithms. Note, also, that the distinction between word sequence and label alignment mirrors the different error metrics. Word error is mediated by a string alignment that minimizes edit distance. Diarization error is mediated by a speaker label alignment (i.e., mapping) that minimizes the sum of speaker and speech/nonspeech error.
Experiments and Results ::: Data
We validated the DOVER algorithm on two datasets of meeting recordings with multiple microphone channels. Our focus on this genre of speech is motivated by our overall interest in technology that can create high-quality speaker-attributed transcripts of multi-person meetings.
The first dataset was drawn from the NIST 2007 Rich Transcription (RT-07) evaluation BIBREF13. The RT-07 “conference meeting” test set consists of 8 meetings from four different recording sites, of varying lengths and with the number of microphones ranging from 3 to 16. Each meeting has from four to six participants, with 31 distinct speakers in total. Diarization error is evaluated on a 22-minute speaker-labeled excerpt from each meeting.
The second dataset consists of 5 internal meetings used in Microsoft's “Project Denmark” BIBREF6. Three of the five meetings were recorded with seven independent consumer devices, followed by automatic synchronization as described in BIBREF14. The other two meetings were recorded with a seven-channel circular microphone array. The meetings took place in several different rooms and lasted for 30 minutes to one hour each, with three to eleven participants per meeting. The meetings were neither scripted nor staged; the participants were familiar with each other and conducted normal work discussions. The diarization reference labels were derived from time- and speaker-marked transcripts created by professional transcribers based on both close-talking and far-field recordings.
Experiments and Results ::: Diarization system
All original diarization outputs for input to DOVER were created with a reimplementation of the ICSI diarization algorithm BIBREF15. The algorithm starts with a uniform segmentation of the audio into snippets of equal duration where each segment constitutes its own speaker cluster, followed by iterative agglomerative clustering and resegmentation. Distance between speaker clusters is measured by the log likelihood difference between a single-speaker hypothesis (one Gaussian mixture model) versus the two-speaker hypothesis (two GMMs). In each iteration, the two most similar speaker clusters are merged, followed by a resegmentation of the entire audio stream by Viterbi alignment to an ergodic HMM over all speaker models. The merging process stops when a BIC-like criterion BIBREF16 indicates no further gains in the model likelihood. When multiple feature streams are used, as described below, the data is modeled by a weighted combination of separate GMMs for each stream.
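For illustration, the following is a heavily simplified sketch of such an agglomerative loop, assuming `features` is a (frames x dims) array of one feature stream; the number of initial clusters, the GMM sizes, the crude merge/stopping criterion, and the omission of the Viterbi resegmentation are all simplifications relative to the system described above.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_loglik(x, n_components):
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag").fit(x)
    return gmm.score(x) * len(x)                 # total (not per-frame) log likelihood

def merge_gain(xa, xb, n=4):
    """Log-likelihood change when modelling two clusters with one merged GMM."""
    merged = gmm_loglik(np.vstack([xa, xb]), 2 * n)
    return merged - (gmm_loglik(xa, n) + gmm_loglik(xb, n))

def agglomerative_diarization(features, n_init=16):
    # Uniform initial segmentation: each chunk starts as its own speaker cluster.
    clusters = [c for c in np.array_split(features, n_init) if len(c) > 0]
    while len(clusters) > 1:
        pairs = [(i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))]
        gains = [merge_gain(clusters[i], clusters[j]) for i, j in pairs]
        best = int(np.argmax(gains))
        if gains[best] < 0:                      # crude stand-in for the BIC-like stopping rule
            break
        i, j = pairs[best]
        clusters[i] = np.vstack([clusters[i], clusters[j]])
        del clusters[j]
        # (A full system would now resegment all frames by Viterbi alignment
        #  to an ergodic HMM over the current speaker models.)
    return clusters
```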
No attempt is made to detect overlapping speech; therefore all our results have an error rate floor that corresponds to the proportion of overlapped speech (about 10% in the Denmark data).
Experiments and Results ::: Experiments on RT-07 data
We processed the NIST conference meetings using the weighted delay-and-sum BeamformIt tool BIBREF17, using $N-1$ audio channels at a time, and resulting in $N$ different audio streams. This is the same leave-one-out strategy as described in BIBREF18 for speech recognition. Furthermore, we rotated the choice of reference channel in these runs to further increase diversity among the outputs, as advocated in BIBREF14. We then ran diarization on each of the resulting audio streams, and DOVER on their outputs. Speech activity was obtained from an HMM-based algorithm that was part of the SRI-ICSI meeting recognition system originally used in the RT-07 evaluation BIBREF19.
Three different feature sets were used in diarization:
Mel-frequency cepstral coefficients (MFCCs), 19 dimensions, extracted every 10 ms from the raw waveforms (no beamforming)
MFCCs extracted from the beamformed audio
MFCCs from beamformed audio, augmented with a vector of estimated time-differences-of-arrival (TDOAs) between the different channels, following BIBREF20
Table TABREF17 shows the outcomes. The first three result columns give speaker error rates for the individual audio channels. Note that the “min” value is an oracle result, i.e., the best that one could do by picking a single channel for diarization. The last two columns give the speaker error and overall DER for the DOVER-combined diarization output. Note that the difference between speaker error and DER is nearly constant, since all systems use the same speech activity information. The missed speech rate is about 3.9%, while the false alarm rate is 4.6%.
Looking at the first three columns, we observe that the range of error rates is very large (greater than 10% absolute) depending on which channel is chosen. The DOVER-generated diarization has error rates that are closer to the oracle choice (minimum error) than to the average error, thereby avoiding the risk of a poor choice of channel.
Experiments and Results ::: Experiments on Project Denmark data
For experiments on this dataset, we used byproducts of the Project Denmark meeting transcription system described in BIBREF14. The system aligns the (possibly unsynchronized) audio streams, and then performs leave-one-out beamforming on 6 out of 7 audio streams, round-robin, resulting in 7 different new audio streams. For purposes of speaker identification, it then computes 128-dimensional d-vectors (acoustic speaker embeddings from a neural network trained to perform speaker ID BIBREF22) at 320 ms intervals. The beamformed audio streams are also transcribed by a speech recognition (ASR) system. Here we use the ASR output only as a speech activity detector (joining words separated by no more than 0.1 s of nonspeech, and padding 0.5 s at the boundaries).
While the Denmark system currently performs speaker identification using enrolled profiles, we are simulating a scenario where speaker-agnostic diarization is applied instead (or in addition, e.g., if only a subset of speakers is known to the system). Since the Denmark audio channels are symmetrical, and no audio channel has privileged status, we would have to either select one channel for diarization, or perform diarization on all channels and combine the outputs; this is where DOVER naturally finds an application.
We ran experiments with three sets of acoustic features, all extracted from the beamformed audio:
MFCCs, 19 dimensions, extracted every 10 ms
MFCCs plus the first 30 principal components of the d-vectors (replicated to match the frame-rate of the MFCCs)
MFCCs plus $3 \times 30$ principal components from 3 out of the 7 d-vector streams, i.e., a partial feature-level combination of audio streams. For channel $i$ the d-vectors were taken from channel $i$ itself, $i-1\pmod {7}$, and $i+1\pmod {7}$.
We also took the outputs of the speaker ID component of the system (from each beamformed audio channel), treated them as diarization labels, and ran DOVER to see if the algorithm could improve the results.
Table TABREF23 shows the results for diarization based on the three feature sets, as well as based on speaker ID, using the same format as for the RT-07 results. Here, too, the difference between speaker error and DER is nearly constant, since all systems use the same speech activity information derived from the speech recognizer. The DER thus includes about 0.6% false alarms and 11.3% miss rate (of which 10.0% are due to overlapped speech, which we do not attempt to detect).
The most important observation is that the DOVER output has a speaker error rate that is very close to, and for the most part slightly lower than, the best (oracle) choice of channel. As for the RT-07 data, the DOVER output is consistently much better than the channel average. Also, the max values show that there is still ample opportunity for very poor choices of a single channel; DOVER removes the need to make that choice.
The last row of results shows that even when the diarization on individual channels is very accurate (due to the availability of speaker models), DOVER can still give a substantial relative error reduction, surpassing the best channel's performance.
Conclusions and Outlook
We have presented a weighted voting algorithm for combining the outputs from several diarization systems over a shared input. The DOVER algorithm first uses a DER-minimizing criterion to map all speaker labels to a common name space, and then performs majority voting at each time instant (including on whether there is speech or not). The proposed method naturally lends itself to unifying diarization outputs obtained from parallel audio channels, e.g., as they arise from meeting capture with multiple microphones or devices. We tested the algorithm on a NIST conference meeting evaluation set, as well as on internal meetings, using diarization by agglomerative clustering combined with a variety of feature streams. We find that the DOVER output consistently beats the averages of the input channels, and can be very close to, or even improve on, the oracle error rate obtained by picking the single best channel for a given meeting.
Some interesting open issues remain. As mentioned, we currently do not attempt to diarize overlapping speech. Once such a capability is available, the DOVER algorithm will have to be modified to handle simultaneous speakers. Another issue is that current diarization systems only output their single best guesses at the speaker labeling. In analogy to confusion network combination, we may want to consider diarization algorithms that produce multiple weighted hypotheses, which are then in turn combined across all systems. A modified DOVER could be used both to generate the “speaker confusion networks” from individual diarization systems, and to combine them.
Acknowledgments
We thank our colleagues for help with the Denmark system and data collection, Xavi Anguera for answering questions regarding BeamformIt, and ICSI for assistance with the RT-07 data. | Unanswerable |
144714fe0d5a2bb7e21a7bf50df39d790ff12916 | 144714fe0d5a2bb7e21a7bf50df39d790ff12916_0 | Q: What are state of the art methods authors compare their work with?
Text: Introduction
Flexibility and ease of access to social media have resulted in a great number of people using online channels for news access. For example, nearly two-thirds of American adults access news through online channels BIBREF0, BIBREF1. BIBREF2 also reported that news consumption through social media has increased significantly in Great Britain.
In comparison to traditional media, social networks have proved to be more beneficial, especially during a crisis, because of their ability to spread breaking news much faster BIBREF3. Not all of this news, however, is real: people may change and manipulate real information for political, economic, or social motivations. Such manipulation leads to news that may be neither completely true nor completely false BIBREF4. As a result, there is misleading information on social media that has the potential to cause many problems in society. Such misinformation, called fake news, comes in a wide variety of types and formats. Fake advertisements, false political statements, satires, and rumors are examples of fake news BIBREF0. This spread of fake news, which is even wider than that of mainstream media BIBREF5, has motivated many researchers and practitioners to focus on building effective automatic frameworks for detecting fake news BIBREF6. Google has announced an online service called “Google News Initiative” to fight fake news BIBREF7. This project will try to help readers recognize fake news and reports BIBREF8.
Detecting fake news is a challenging task. A fake news detection model tries to predict intentionally misleading news based on analyzing real and fake news that have previously been reviewed. Therefore, the availability of high-quality and large-size training data is an important issue.
The task of fake news detection can be a simple binary classification or, in a challenging setting, can be a fine-grained classification BIBREF9. After 2017, when fake news datasets were introduced, researchers tried to increase the performance of their models using this data. Kaggle dataset, ISOT dataset, and LIAR dataset are some of the most well-known publicly available datasets BIBREF10.
In this paper, we propose a new model based on capsule neural networks for detecting fake news. We propose architectures for detecting fake news in news statements of different lengths by using different varieties of word embedding and applying different levels of n-gram as feature extractors. We show that these proposed models achieve better results in comparison to the state-of-the-art methods.
The rest of the paper is organized as follows: Section SECREF2 reviews related work about fake news detection. Section SECREF3 presents the model proposed in this paper. The datasets used for fake news detection and evaluation metrics are introduced in Section SECREF4. Section SECREF5 reports the experimental results, comparison with the baseline classification and discussion. Section SECREF6 summarizes the paper and concludes this work.
Related work
Fake news detection has been studied in several investigations. BIBREF11 presented an overview of deception assessment approaches, including the major classes and the final goals of these approaches. They also investigated the problem using two approaches: (1) linguistic methods, in which the related language patterns were extracted and precisely analyzed from the news content for making decision about it, and (2) network approaches, in which the network parameters such as network queries and message metadata were deployed for decision making about new incoming news.
BIBREF12 proposed an automated fake news detector, called CSI that consists of three modules: Capture, Score, and Integrate, which predicts by taking advantage of three features related to the incoming news: text, response, and source of it. The model includes three modules; the first one extracts the temporal representation of news articles, the second one represents and scores the behavior of the users, and the last module uses the outputs of the first two modules (i.e., the extracted representations of both users and articles) and use them for the classification. Their experiments demonstrated that CSI provides an improvement in terms of accuracy.
BIBREF13 introduced a new approach which tries to decide whether a news item is fake or not based on the users that interacted with and/or liked it. They proposed two classification methods. The first method deploys a logistic regression model and takes the user interaction into account as the features. The second one is a novel adaptation of the Boolean label crowdsourcing techniques. The experiments showed that both approaches achieved high accuracy and proved that considering the users who interact with the news is an important feature for making a decision about that news.
BIBREF14 introduced two new datasets that are related to seven different domains, and instead of short statements containing fake news information, their datasets contain actual news excerpts. They deployed a linear support vector machine classifier and showed that linguistic features such as lexical, syntactic, and semantic level features are beneficial to distinguish between fake and genuine news. The results showed that the performance of the developed system is comparable to that of humans in this area.
BIBREF15 provided a novel dataset, called LIAR, consisting of 12,836 labeled short statements. The instances in this dataset are chosen from more natural contexts such as Facebook posts, tweets, political debates, etc. They proposed neural network architecture for taking advantage of text and meta-data together. The model consists of a Convolutional Neural Network (CNN) for feature extraction from the text and a Bi-directional Long Short Term Memory (BiLSTM) network for feature extraction from the meta-data and feeds the concatenation of these two features into a fully connected softmax layer for making the final decision about the related news. They showed that the combination of metadata with text leads to significant improvements in terms of accuracy.
BIBREF16 proved that incorporating speaker profiles into an attention-based LSTM model can improve the performance of a fake news detector. They claim speaker profiles can contribute to the model in two different ways. First, including them in the attention model. Second, considering them as additional input data. They used party affiliation, speaker location, title, and credit history as speaker profiles, and they show this metadata can increase the accuracy of the classifier on the LIAR dataset.
BIBREF17 presented a new dataset for fake news detection, called ISOT. This dataset was entirely collected from real-world sources. They used n-gram models and six machine learning techniques for fake news detection on the ISOT dataset. They achieved the best performance by using TF-IDF as the feature extractor and linear support vector machine as the classifier.
BIBREF18 proposed an end-to-end framework called event adversarial neural network, which is able to extract event-invariant multi-modal features. This model has three main components: the multi-modal feature extractor, the fake news detector, and the event discriminator. The first component uses CNN as its core module. For the second component, a fully connected layer with softmax activation is deployed to predict if the news is fake or not. As the last component, two fully connected layers are used, which aims at classifying the news into one of K events based on the first component representations.
BIBREF19 developed a tractable Bayesian algorithm called Detective, which provides a balance between selecting news that directly maximizes the objective value and selecting news that aids toward learning user's flagging accuracy. They claim the primary goal of their works is to minimize the spread of false information and to reduce the number of users who have seen the fake news before it becomes blocked. Their experiments show that Detective is very competitive against the fictitious algorithm OPT, an algorithm that knows the true users’ parameters, and is robust in applying flags even in a setting where the majority of users are adversarial.
Capsule networks for fake news detection
In this section, we first introduce different variations of word embedding models. Then, we proposed two capsule neural network models according to the length of the news statements that incorporate different word embedding models for fake news detection.
Capsule networks for fake news detection ::: Different variations of word embedding models
Dense word representations can capture syntactic or semantic information from words. When word representations are expressed in a low-dimensional space, they are called word embeddings. In these representations, words with similar meanings are in close positions in the vector space.
In 2013, BIBREF20 proposed word2vec, which is a group of highly efficient computational models for learning word embeddings from raw text. These models are two-layer neural networks trained on a large volume of text. They can produce vector representations with several hundred dimensions for every word in a vector space. In this space, words with similar meanings are mapped to close coordinates.
There are pre-trained word2vec vectors, such as the 'Google News' vectors, which were trained on 100 billion words from Google News. One of the popular methods to improve text processing performance is using these pre-trained vectors for initializing word vectors, especially in the absence of a large supervised training set. These distributed vectors can be fed into deep neural networks and used for any text classification task BIBREF21. These pre-trained embeddings, however, can further be enhanced.
BIBREF21 applied different learning settings for vector representation of words via word2vec for the first time and showed their superiority compared to the regular pre-trained embeddings when they are used within a CNN model. These settings are as follow:
Static word2vec model: in this model, pre-trained vectors are used as input to the neural network architecture, these vectors are kept static during training, and only the other parameters are learned.
Non-static word2vec model: this model uses the pre-trained vectors at the initialization of learning, but during the training phase, these vectors are fine-tuned for each task using the training data of the target task.
Multichannel word2vec model: the model uses two sets of static and non-static word2vec vectors, and a part of vectors fine-tune during training.
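A minimal Keras sketch of how these settings differ in practice is shown below; the vocabulary size and the randomly generated placeholder matrix stand in for real pre-trained word2vec/GloVe vectors and are assumptions for illustration only.

```python
import numpy as np
import tensorflow as tf

vocab_size, dim = 10000, 300
pretrained = np.random.normal(size=(vocab_size, dim)).astype("float32")  # placeholder for real vectors

def embedding_layer(weights, trainable):
    return tf.keras.layers.Embedding(
        weights.shape[0], weights.shape[1],
        embeddings_initializer=tf.keras.initializers.Constant(weights),
        trainable=trainable)

static_emb = embedding_layer(pretrained, trainable=False)      # static setting
non_static_emb = embedding_layer(pretrained, trainable=True)   # non-static setting
# The multichannel setting feeds the same token ids through both layers and
# lets only the non-static copy be fine-tuned during training.
```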
Capsule networks for fake news detection ::: Proposed model
Although different models based on deep neural networks have been proposed for fake news detection, there is still a great need for further improvements in this task. In the current research, we aim at using capsule neural networks to enhance the accuracy of fake news identification systems.
The capsule neural network was introduced by BIBREF22 for the first time in the paper called “Dynamic Routing Between Capsules”. In this paper, they showed that capsule networks could perform better than CNNs on the MNIST dataset with highly overlapping digits. In computer vision, a capsule network is a neural network that tries to perform inverse graphics. In a sense, the approach tries to reverse-engineer the physical process that produces an image of the world BIBREF23.
The capsule network is composed of many capsules that act like a function, and try to predict the instantiation parameters and presence of a particular object at a given location.
One key feature of capsule networks is equivariance, which aims at keeping detailed information about the location of the object and its pose throughout the network. For example, if someone rotates the image slightly, the activation vectors also change slightly BIBREF24. One of the limitations of a regular CNN is losing the precise location and pose of the objects in an image. Although this is not a challenging issue when classifying the whole image, it can be a bottleneck for image segmentation or object detection that needs precise location and pose. A capsule, however, can overcome this shortcoming in such applications BIBREF24.
Capsule networks have recently received significant attention. This model aims at improving CNNs and RNNs by adding the following capabilities to each source, and target node: (1) the source node has the capability of deciding about the number of messages to transfer to target nodes, and (2) the target node has the capability of deciding about the number of messages that may be received from different source nodes BIBREF25.
After the success of capsule networks in computer vision tasks BIBREF26, BIBREF27, BIBREF28, capsule networks have been used in different NLP tasks, including text classification BIBREF29, BIBREF30, multi-label text classification BIBREF31, sentiment analysis BIBREF18, BIBREF32, identifying aggression and toxicity in comments BIBREF33, and zero-shot user intent detection BIBREF34.
In capsule networks, the features that are extracted from the text are encapsulated into capsules (groups of neurons). The first work that applied capsule networks for text classification was done by BIBREF35. In their research, the performance of the capsule network as a text classification network was evaluated for the first time. Their capsule network architecture includes a standard convolutional layer called n-gram convolutional layer that works as a feature extractor. The second layer is a layer that maps scalar-valued features into a capsule representation and is called the primary capsule layer. The outputs of these capsules are fed to a convolutional capsule layer. In this layer, each capsule is only connected to a local region in the layer below. In the last step, the output of the previous layer is flattened and fed through a feed-forward capsule layer. For this layer, every capsule of the output is considered as a particular class. In this architecture, a max-margin loss is used for training the model. Figure FIGREF6 shows the architecture proposed by BIBREF35.
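For concreteness, a sketch of the max-margin loss commonly used to train such capsule classifiers (following BIBREF22) is given below; the constants 0.9, 0.1, and 0.5 are the usual defaults and may differ from the exact values used in the cited architecture.

```python
import tensorflow as tf

def margin_loss(y_true, v_lengths, m_plus=0.9, m_minus=0.1, lam=0.5):
    """y_true: one-hot labels (batch, n_classes); v_lengths: lengths of the class capsules."""
    present = y_true * tf.square(tf.maximum(0.0, m_plus - v_lengths))
    absent = lam * (1.0 - y_true) * tf.square(tf.maximum(0.0, v_lengths - m_minus))
    return tf.reduce_mean(tf.reduce_sum(present + absent, axis=-1))
```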
Some characteristics of capsules make them suitable for presenting a sentence or document as a vector for text classification. These characteristics include representing attributes of partial entities and expressing semantic meaning in a wide space BIBREF29.
For fake news identification with different lengths of statements, our model benefits from several parallel capsule networks and uses average pooling in the last stage. With this architecture, the models can learn more meaningful and extensive text representations on different n-gram levels according to the length of texts.
Depending on the length of the news statements, we use two different architectures. Figure FIGREF7 depicts the structure of the proposed model for medium or long news statements. In the model, a non-static word embedding is used as an embedding layer. In this layer, we use 'glove.6B.300d' as a pre-trained word embedding, and use four parallel networks by considering four different filter sizes 2, 3, 4, 5 as n-gram convolutional layers for feature extraction. In the next layers, each parallel network has a primary capsule layer followed by a convolutional capsule layer, as presented in Figure FIGREF6. A fully connected capsule layer is used as the last layer for each parallel network. At the end, average pooling is added for producing the final result.
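The structural sketch below reflects this description (non-static embedding, four parallel branches with filter sizes 2-5, and averaging of the branch outputs). Since stock Keras has no capsule layers, `capsule_stack` is only a hypothetical stand-in for the primary-capsule, convolutional-capsule, and fully connected capsule chain, and the filter count and input length are assumed values.

```python
import tensorflow as tf
from tensorflow.keras import layers

def capsule_stack(x, n_classes):
    # Hypothetical placeholder: a real implementation would apply primary,
    # convolutional, and fully connected capsule layers with dynamic routing here.
    x = layers.GlobalMaxPooling1D()(x)
    return layers.Dense(n_classes, activation="softmax")(x)

def build_model(vocab_size, n_classes, max_len=300, glove_weights=None):
    inp = layers.Input(shape=(max_len,))
    init = (tf.keras.initializers.Constant(glove_weights)
            if glove_weights is not None else "uniform")
    emb = layers.Embedding(vocab_size, 300, embeddings_initializer=init,
                           trainable=True)(inp)      # non-static embedding
    branches = []
    for k in (2, 3, 4, 5):                            # n-gram convolutional branches
        x = layers.Conv1D(128, kernel_size=k, activation="relu")(emb)
        branches.append(capsule_stack(x, n_classes))
    out = layers.Average()(branches)                  # average the parallel branch outputs
    return tf.keras.Model(inp, out)
```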
For short news statements, due to the limitation of word sequences, a different structure has been proposed. The layers are like the first model, but only two parallel networks are considered with 3 and 5 filter sizes. In this model, a static word embedding is used. Figure FIGREF8 shows the structure of the proposed model for short news statements.
Evaluation ::: Dataset
Several datasets have been introduced for fake news detection. One of the main requirements for using neural architectures is having a large dataset to train the model. In this paper, we use two datasets, namely ISOT fake news BIBREF17 and LIAR BIBREF15, which have a large number of documents for training deep models. The length of news statements for ISOT is medium or long, and LIAR is short.
Evaluation ::: Dataset ::: The ISOT fake news dataset
In 2017, BIBREF17 introduced a new dataset that was collected from real-world sources. This dataset consists of news articles from Reuters.com and Kaggle.com for real news and fake news, respectively. Every instance in the dataset is longer than 200 characters. For each article, the following metadata is available: article type, article text, article title, article date, and article label (fake or real). Table TABREF12 shows the type and size of the articles for the real and fake categories.
Evaluation ::: Dataset ::: The LIAR dataset
As mentioned in Section SECREF2, one of the recent well-known datasets is provided by BIBREF15. BIBREF15 introduced a new large dataset called LIAR, which includes 12.8K human-labeled short statements from the POLITIFACT.COM API. Each statement is evaluated by a POLITIFACT.COM editor for its validity. Six fine-grained labels are considered for the degree of truthfulness, including pants-fire, false, barely-true, half-true, mostly-true, and true. The distribution of labels in this dataset is as follows: 1,050 pants-fire labels and a range of 2,063 to 2,638 for other labels.
In addition to news statements, this dataset consists of several metadata as speaker profiles for each news item. These metadata include valuable information about the subject, speaker, job, state, party, and total credit history count of the speaker of the news. The total credit history count includes the barely-true counts, false counts, half-true counts, mostly-true counts, and pants-fire counts. The statistics of the LIAR dataset are shown in Table TABREF14. Some excerpt samples from the LIAR dataset are presented in Table TABREF15.
Evaluation ::: Experimental setup
The experiments of this paper were conducted on a PC with Intel Core i7 6700k, 3.40GHz CPU; 16GB RAM; Nvidia GeForce GTX 1080Ti GPU in a Linux workstation. For implementing the proposed model, the Keras library BIBREF36 was used, which is a high-level neural network API.
Evaluation ::: Evaluation metrics
The evaluation metric in our experiments is the classification accuracy. Accuracy is the ratio of correct predictions to the total number of samples and is computed as:
$Accuracy = \frac{TP + TN}{TP + TN + FP + FN}$
where TP represents the number of True Positive results, FP represents the number of False Positive results, TN represents the number of True Negative results, and FN represents the number of False Negative results.
Results
For evaluating the effectiveness of the proposed model, a series of experiments on two datasets were performed. These experiments are explained in this section and the results are compared to other baseline methods. We also discuss the results for every dataset separately.
Results ::: Classification for ISOT dataset
As mentioned in Section SECREF4, BIBREF17 presented the ISOT dataset. Following the baseline paper, we use 1000 articles from each of the real and fake sets, a total of 2000 articles, as the test set, and the model is trained on the rest of the data.
First, the proposed model is evaluated with the different word embeddings described in Section SECREF1. Table TABREF20 shows the result of applying different word embeddings for the proposed model on ISOT, which consists of medium- and long-length news statements. The best result is achieved by applying the non-static embedding.
BIBREF17 evaluated different machine learning methods for fake news detection on the ISOT dataset, including the Support Vector Machine (SVM), the Linear Support Vector Machine (LSVM), the K-Nearest Neighbor (KNN), the Decision Tree (DT), the Stochastic Gradient Descent (SGD), and the Logistic regression (LR) methods.
Table TABREF21 shows the performance of non-static capsule network for fake news detection in comparison to other methods. The accuracy of our model is 7.8% higher than the best result achieved by LSVM.
Results ::: Discussion
The proposed model can predict true labels with high accuracy, resulting in a very small number of wrong predictions. Table TABREF23 shows the titles of two wrongly predicted samples for detecting fake news. To analyze our results, we investigate the effects of sample words that appear in training statements tagged as real and fake, separately.
For this analysis, all of the words and their frequencies are extracted from the two wrong samples and from both the real and fake labels of the training data. Table TABREF24 shows the information of this data. Then, for every wrongly predicted sample, stop-words are omitted, and words with a frequency of more than two are listed. After that, all of these words and their frequencies in the real and fake training datasets are extracted. In this part, the frequencies of these words are normalized. Table TABREF25 and Table TABREF28 show the normalized frequencies of words for each sample, respectively. In these tables, for ease of comparison, the normalized frequencies of real and fake labels of training data and the normalized frequency for each word in every wrong sample are multiplied by 10.
The label of Sample 1 is predicted as fake, but it is real. In Table TABREF25, the six most frequent words of Sample 1 are listed; the word "tax" appears twice as often as each of the other words in Sample 1, and this word is clearly more frequent in the training data with real labels. The same observation holds for other words such as "state".
The text of Sample 2 is predicted as real news, but it is fake. Table TABREF28 lists six frequent words of Sample 2. The two most frequent words of this text are "trump" and "sanders". These words are more frequent in the training data with fake labels than in the training data with real labels. "All" and "even" are two other frequent words. We use "even" to refer to something surprising, unexpected, unusual, or extreme, and "all" means every one, the complete number or amount, or the whole; therefore, a text that includes these words is more likely to be classified as fake news. These experiments show the strong effect of the frequency of the sample words on the prediction of the labels.
Results ::: Classification for the LIAR dataset
As mentioned in Section SECREF13, the LIAR dataset is a multi-label dataset with short news statements. In comparison to the ISOT dataset, the classification task for this dataset is more challenging. We evaluate the proposed model while using different metadata, which are considered as speaker profiles. Table TABREF30 shows the performance of the capsule network for fake news detection when each type of metadata is added. The best result of the model is achieved by using history as metadata. The results show that this model can perform better than state-of-the-art baselines, including hybrid CNN BIBREF15 and LSTM with attention BIBREF16, by 3.1% on the validation set and 1% on the test set.
Results ::: Classification for the LIAR dataset ::: Discussion
Figure FIGREF32 shows the confusion matrix of the best classification using the proposed model for the test set. The model classifies false, half-true, and mostly-true news with more accuracy. Nevertheless, it is difficult to distinguish between true and mostly-true and also between barely-true and false. The worst accuracy is for classifying pants-fire. For these labels, detecting the correct label is more challenging, and many pants-fire texts are predicted as false.
Conclusion
In this paper, we apply capsule networks for fake news detection. We propose two architectures for different lengths of news statements. We apply two strategies to improve the performance of the capsule networks for the task. First, for detecting medium or long news texts, we use four parallel capsule networks, each of which extracts different n-gram features (2, 3, 4, 5) from the input texts. Second, we use non-static embedding such that the word embedding model is incrementally up-trained and updated in the training phase.
Moreover, as a fake news detector for short news statements, we use only two parallel networks with filter sizes 3 and 5 as feature extractors and a static word embedding model. For evaluation, two datasets are used: the ISOT dataset with medium-length or long news texts, and LIAR with short statement texts. The experimental results on these two well-known datasets showed improvements in accuracy of 7.8% on the ISOT dataset, and of 3.1% on the validation set and 1% on the test set of the LIAR dataset. | ISOT dataset: LSVM
Liar dataset: Hybrid CNN and LSTM with attention |
f01aa192d97fa3cc36b6e316355dc5da0e9b97dc | f01aa192d97fa3cc36b6e316355dc5da0e9b97dc_0 | Q: What are the baselines model?
Text: Introduction
With more than one hundred thousand new scholarly articles being published each year, there is a rapid growth in the number of citations for the relevant scientific articles. In this context, we highlight the following interesting facts about the process of citing scientific articles: (i) the most commonly cited paper by Gerard Salton, titled “A Vector Space Model for Information Retrieval” (alleged to have been published in 1975) does not actually exist in reality BIBREF0 , (ii) the scientific authors read only 20% of the works they cite BIBREF1 , (iii) one third of the references in a paper are redundant and 40% are perfunctory BIBREF2 , (iv) 62.7% of the references could not be attributed a specific function (definition, tool etc.) BIBREF3 . Despite these facts, the existing bibliographic metrics consider that all citations are equally significant.
In this paper, we emphasize the fact that not all the references of a paper are equally influential. For instance, we believe that for our current paper, BIBREF4 is a more influential reference than BIBREF5, although the former has received fewer citations (9) than the latter (1650) so far. Therefore the influence of a cited paper completely depends upon the context of the citing paper, not the overall citation count of the cited paper. We further took the opinion of the original authors of a few selected papers and realized that around 16% of the references in a paper are highly influential, and the rest are trivial (Section SECREF4). This motivates us to design a prediction model, GraLap, to automatically label the influence of a cited paper with respect to a citing paper. Here, we label paper-reference pairs rather than references alone, because a reference that is influential for one citing paper may not be influential to the same extent for another citing paper.
We experiment with the ACL Anthology Network (AAN) dataset and show that GraLap, along with the novel feature set, quite efficiently predicts the intensity of references of papers, achieving a (Pearson) correlation of INLINEFORM0 with the human annotations. Finally, we present four interesting applications to show the efficacy of considering unequal intensity of references, compared to uniform intensity.
The contributions of the paper are four-fold: (i) we acquire a rich annotated dataset where paper-reference pairs are labeled based on the influence scores (Section SECREF4 ), which is perhaps the first gold-standard for this kind of task; (ii) we propose a graph-based label propagation model GraLap for semi-supervised learning which has tremendous potential for any task where the training set is less in number and labels are non-uniformly distributed (Section SECREF3 ); (iii) we propose a diverse set of features (Section SECREF10 ); most of them turn out to be quite effective to fit into the prediction model and yield improved results (Section SECREF5 ); (iv) we present four applications to show how incorporating the reference intensity enhances the performance of several state-of-the-art systems (Section SECREF6 ).
Defining Intensity of References
All the references of a paper usually do not carry equal intensity/strength with respect to the citing paper, because some papers have influenced the research more than others. To pin down this intuition, here we discretize the reference intensity by numerical values within the range of 1 to 5 (5: most influential, 1: least influential). The appropriate definitions of the different labels of reference intensity are presented in Figure FIGREF2; they also form the basis for building the annotated dataset (see Section SECREF4).
Note that “reference intensity” and “reference similarity” are two different aspects. It might happen that two similar reference are used with different intensity levels in a citing paper – while one is just mentioned somewhere in the paper and other is used as a baseline. Here, we address the former problem as a semi-supervised learning problem with clues taken from content of the citing and cited papers.
Reference Intensity Prediction Model
In this section, we formally define the problem and introduce our prediction model.
Problem Definition
We are given a set of papers INLINEFORM0 and a set of references INLINEFORM1 , where INLINEFORM2 corresponds to the set of references (or cited papers) of INLINEFORM3 . There is a set of papers INLINEFORM4 whose references INLINEFORM5 are already labeled by INLINEFORM6 (each reference is labeled with exactly one value). Our objective is to define a predictive function INLINEFORM7 that labels the references INLINEFORM8 of the papers INLINEFORM9 whose reference intensities are unknown, i.e., INLINEFORM10 .
Since the size of the annotated (labeled) data is much smaller than that of the unlabeled data ( INLINEFORM0 ), we treat this as a semi-supervised learning problem.
Definition 1 (Semi-supervised Learning) Given a set of entries INLINEFORM0 and a set of possible labels INLINEFORM1 , let ( INLINEFORM2 ), ( INLINEFORM3 ),..., ( INLINEFORM4 ) be the set of labeled data, where INLINEFORM5 is a data point and INLINEFORM6 is its corresponding label. We assume that at least one instance of each class label is present in the labeled dataset. Let ( INLINEFORM7 ), ( INLINEFORM8 ),..., ( INLINEFORM9 ) be the unlabeled data points, where INLINEFORM10 are unknown. Each entry INLINEFORM11 is represented by a set of features INLINEFORM12 . The problem is to determine the unknown labels using INLINEFORM13 and INLINEFORM14 .
GraLap: A Prediction Model
We propose GraLap, a variant of the label propagation (LP) model proposed by BIBREF9 , in which a node in the graph propagates its associated label to its neighbors based on proximity. We intend to assign the same label to vertices that are closely connected. However, unlike the traditional LP model, where the original values of the labels continue to fade as the algorithm progresses, we handle this problem systematically in GraLap. Additionally, we apply a post-processing step to handle the “class-imbalance problem”.
Graph Creation. The algorithm starts with the creation of a fully connected weighted graph INLINEFORM0 where nodes are data points and the weight INLINEFORM1 of each edge INLINEFORM2 is determined by the radial basis function as follows:
DISPLAYFORM0
The weight is controlled by a parameter INLINEFORM0 . Later in this section, we shall discuss how INLINEFORM1 is selected. Each node is allowed to propagate its label to its neighbors through edges (the larger the edge weight, the easier the propagation).
Transition Matrix. We create a probabilistic transition matrix INLINEFORM0 , where each entry INLINEFORM1 indicates the probability of jumping from INLINEFORM2 to INLINEFORM3 based on the following: INLINEFORM4 .
Label Matrix. Here, we allow a soft label (interpreted as a distribution of labels) to be associated with each node. We then define a label matrix INLINEFORM0 , where INLINEFORM1 th row indicates the label distribution for node INLINEFORM2 . Initially, INLINEFORM3 contains only the values of the labeled data; others are zero.
Label Propagation Algorithm. This algorithm works as follows:
After initializing INLINEFORM0 and INLINEFORM1 , the algorithm starts by disseminating the label from each node to its neighbors (including a self-loop) in one step (Step 3). Then we normalize each entry of INLINEFORM2 by the sum of its corresponding row in order to maintain the interpretation of label probabilities (Step 4). Step 5 is crucial: here we want the labeled sources INLINEFORM3 to be persistent. During the iterations, the labels of the initially labeled nodes INLINEFORM4 may fade away among other labels. Therefore, we forcefully restore their actual labels by setting INLINEFORM5 (if INLINEFORM6 is originally labeled as INLINEFORM7 ) and the other entries ( INLINEFORM8 ) to zero. We keep on “pushing” the labels from the labeled data points, which in turn pushes the class boundary through high-density regions and settles it in low-density space. In this way, our approach intelligently uses the unlabeled data in the intermediate steps of learning.
Assigning Final Labels. Once INLINEFORM0 is computed, one may take the most likely label from the label distribution for each unlabeled data point. However, this approach does not guarantee the label proportion observed in the annotated data (which in this case is not well-separated, as shown in Section SECREF4 ). Therefore, we adopt a label-based normalization technique. Assume that the label proportions in the labeled data are INLINEFORM1 (s.t. INLINEFORM2 ). In case of INLINEFORM3 , we try to balance the label proportion observed in the ground-truth. The label mass is the column sum of INLINEFORM4 , denoted by INLINEFORM5 , each of which is scaled in such a way that INLINEFORM6 . The label of an unlabeled data point is finalized as the label with the maximum value in the corresponding row of INLINEFORM7 .
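A minimal sketch of the procedure above is given below. It is illustrative rather than the authors' implementation: the edge weight is assumed to take the standard radial-basis form w_ij = exp(-||x_i - x_j||^2 / sigma^2) used in label propagation, and all function and variable names are ours.

```python
import numpy as np

def gralap(X, y, n_labels, sigma, n_iter=100):
    """Sketch of GraLap: RBF graph, row-normalized transition matrix,
    iterative propagation with clamping of labeled nodes, and class-mass
    normalization of the final label distribution.
    X: (n, d) feature matrix; y: length-n integer array, -1 for unlabeled."""
    X, y = np.asarray(X, float), np.asarray(y)
    n = X.shape[0]
    labeled = np.where(y >= 0)[0]

    # Fully connected weighted graph with (assumed) RBF edge weights.
    sq_dist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq_dist / sigma ** 2)

    # Probabilistic transition matrix: row-normalize W.
    T = W / W.sum(axis=1, keepdims=True)

    # Label matrix: labeled rows are one-hot, the rest start at zero.
    Y = np.zeros((n, n_labels))
    Y[labeled, y[labeled]] = 1.0
    Y_clamp = Y[labeled].copy()

    for _ in range(n_iter):
        Y = T @ Y                                              # propagate
        Y /= np.maximum(Y.sum(axis=1, keepdims=True), 1e-12)   # renormalize rows
        Y[labeled] = Y_clamp                                   # restore labeled nodes

    # Class-mass normalization: scale column masses to match the label
    # proportions observed in the labeled data.
    props = Y_clamp.sum(axis=0) / len(labeled)
    mass = np.maximum(Y.sum(axis=0), 1e-12)
    Y = Y * (props / mass)
    return Y.argmax(axis=1)
```

For large collections of paper-reference pairs one would typically sparsify the fully connected graph (e.g., k-nearest neighbors), since the dense construction above is quadratic in the number of data points.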
Convergence. Here we briefly show that our algorithm is guaranteed to converge. Let us combine Steps 3 and 4 as INLINEFORM0 , where INLINEFORM1 . INLINEFORM2 is composed of INLINEFORM3 and INLINEFORM4 , where INLINEFORM5 never changes because of the reassignment. We can split INLINEFORM6 at the boundary of labeled and unlabeled data as follows:
INLINEFORM0
Therefore, INLINEFORM0 , which can lead to INLINEFORM1 , where INLINEFORM2 is the shape of INLINEFORM3 at iteration 0. We need to show INLINEFORM4 . By construction, INLINEFORM5 , and since INLINEFORM6 is row-normalized, and INLINEFORM7 is a part of INLINEFORM8 , it leads to the following condition: INLINEFORM9 . So, DISPLAYFORM0
Therefore, the sum of each row in INLINEFORM0 converges to zero, which indicates INLINEFORM1 .
Selection of INLINEFORM0 . Assuming a spatial representation of the data points, we construct a minimum spanning tree using Kruskal's algorithm BIBREF10 , with the distance between two nodes measured by Euclidean distance. Initially, no nodes are connected. We keep adding edges in increasing order of distance. We choose the distance (say, INLINEFORM1 ) of the first edge which connects two components with differently labeled points in them. We consider INLINEFORM2 as a heuristic for the minimum distance between two classes, and arbitrarily set INLINEFORM3 , following the INLINEFORM4 rule of the normal distribution BIBREF11 .
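A sketch of this heuristic is shown below, assuming the final choice is sigma = delta_0 / 3 (the 3-sigma reading of the rule above); the exact constant is hidden behind the placeholders, so treat it as an assumption, and the function name is ours.

```python
import numpy as np

def select_sigma(X, y):
    """Kruskal-style sweep: add edges in increasing order of Euclidean
    distance and take the length (delta_0) of the first edge that joins two
    components containing differently labeled points; return delta_0 / 3."""
    n = X.shape[0]
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    # Labels reachable inside each component (empty set = unlabeled only).
    labels_in = [set([y[i]]) if y[i] >= 0 else set() for i in range(n)]

    edges = sorted(
        (np.linalg.norm(X[i] - X[j]), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri == rj:
            continue
        if labels_in[ri] and labels_in[rj] and labels_in[ri] != labels_in[rj]:
            return d / 3.0            # delta_0 / 3, i.e. sigma
        parent[ri] = rj
        labels_in[rj] |= labels_in[ri]
    return edges[-1][0] / 3.0         # fallback: no such edge found
```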
Features for Learning Model
We use a wide range of features that suitably represent a paper-reference pair ( INLINEFORM0 ), indicating INLINEFORM1 refers to INLINEFORM2 through reference INLINEFORM3 . These features can be grouped into six general classes.
The “reference context” of INLINEFORM0 in INLINEFORM1 is defined by a three-sentence window (the sentence where INLINEFORM2 occurs and its immediate previous and next sentences). For multiple occurrences, we calculate the average score. We use “reference sentence” to denote the sentence where INLINEFORM3 appears.
(i) CF:Alone. It indicates whether INLINEFORM0 is mentioned alone in the reference context or together with other references.
(ii) CF:First. When INLINEFORM0 is grouped with others, this feature indicates whether it is mentioned first (e.g., “[2]” is first in “[2,4,6]”).
The next four features are based on the occurrence of words in the corresponding manually created lists (see Table TABREF9 ), each capturing a different aspect.
(iii) CF:Relevant. It indicates whether INLINEFORM0 is explicitly mentioned as relevant in the reference context (Rel in Table TABREF9 ).
(iv) CF:Recent. It tells whether the reference context indicates that INLINEFORM0 is new (Rec in Table TABREF9 ).
(v) CF:Extreme. It implies that INLINEFORM0 is extreme in some way (Ext in Table TABREF9 ).
(vi) CF:Comp. It indicates whether the reference context makes some kind of comparison with INLINEFORM0 (Comp in Table TABREF9 ).
Note we do not consider any sentiment-based features as suggested by BIBREF6 .
It is natural that a high degree of semantic similarity between the contents of INLINEFORM0 and INLINEFORM1 indicates the influence of INLINEFORM2 on INLINEFORM3 . We assume that although the full text of INLINEFORM4 is given, we do not have access to the full text of INLINEFORM5 (possibly due to subscription charges or the unavailability of older papers). Therefore, we consider only the title of INLINEFORM6 as a proxy for its full text. We then calculate the cosine similarity between the title (T) of INLINEFORM7 and (i) SF:TTitle. the title, (ii) SF:TAbs. the abstract, (iii) SF:TIntro. the introduction, (iv) SF:TConcl. the conclusion, and (v) SF:TRest. the rest of the sections (sections other than the abstract, introduction and conclusion) of INLINEFORM8 .
We further assume that the “reference context” (RC) of INLINEFORM0 in INLINEFORM1 might provide an alternative way of summarizing the usage of the reference. Therefore, we take the same similarity-based approach mentioned above, but replace the title of INLINEFORM2 with its RC and obtain five more features: (vi) SF:RCTitle, (vii) SF:RCAbs, (viii) SF:RCIntro, (ix) SF:RCConcl and (x) SF:RCRest. If a reference appears multiple times in a citing paper, we aggregate all of its INLINEFORM3 s together.
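A sketch of how such similarity features can be computed is given below; the paper specifies only cosine similarity, so the TF-IDF representation (and the scikit-learn calls) are our assumption, and the section names are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similarity_features(query_text, sections):
    """Cosine similarity between a query text (the cited paper's title for
    SF:T*, or the reference context for SF:RC*) and each section of the
    citing paper. `sections` maps a section name to its raw text, e.g.
    {"title": ..., "abstract": ..., "intro": ..., "concl": ..., "rest": ...}."""
    names = list(sections)
    vec = TfidfVectorizer(stop_words="english")
    mat = vec.fit_transform([query_text] + [sections[n] for n in names])
    sims = cosine_similarity(mat[0:1], mat[1:]).ravel()
    return dict(zip(names, sims))
```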
The underlying assumption of these features is that a reference which occurs more frequently in a citing paper is more influential than one with a single occurrence BIBREF8 . We count the frequency of INLINEFORM0 in (i) FF:Whole. the entire content, (ii) FF:Intro. the introduction, (iii) FF:Rel. the related work, and (iv) FF:Rest. the rest of the sections (as mentioned in Section UID12 ) of INLINEFORM1 . We also introduce (v) FF:Sec. to measure the fraction of different sections of INLINEFORM2 in which INLINEFORM3 occurs (assuming that the appearance of INLINEFORM4 in more sections indicates greater influence). These features are further normalized by the number of sentences in INLINEFORM5 in order to avoid unnecessary bias from the size of the paper.
Position of a reference in a paper might be a predictive clue to measure the influence BIBREF6 . Intuitively, the earlier the reference appears in the paper, the more important it seems to us. For the first two features, we divide the entire paper into two parts equally based on the sentence count and then see whether INLINEFORM0 appears (i) PF:Begin. in the beginning or (ii) PF:End. in the end of INLINEFORM1 . Importantly, if INLINEFORM2 appears multiple times in INLINEFORM3 , we consider the fraction of times it occurs in each part.
For the other two features, we take the entire paper, consider sentences as atomic units, and measure the positions of the sentences where INLINEFORM0 appears, including (iii) PF:Mean. the mean position of appearance, and (iv) PF:Std. the standard deviation of the different appearances. These features are normalized by the total length (number of sentences) of INLINEFORM1 , thus ranging from 0 (indicating the beginning of INLINEFORM2 ) to 1 (indicating the end of INLINEFORM3 ).
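A small sketch of the PF features is shown below, assuming the occurrences are given as 0-based sentence indices and that the reference occurs at least once; this is illustrative and not tied to the authors' implementation.

```python
import numpy as np

def positional_features(occurrence_sentence_ids, n_sentences):
    """PF features from the sentence indices where the reference occurs,
    normalized by the total number of sentences in the citing paper."""
    pos = np.asarray(occurrence_sentence_ids, dtype=float) / n_sentences
    return {
        "PF:Begin": float(np.mean(pos < 0.5)),   # fraction of occurrences in the first half
        "PF:End":   float(np.mean(pos >= 0.5)),  # fraction in the second half
        "PF:Mean":  float(pos.mean()),
        "PF:Std":   float(pos.std()),
    }
```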
The linguistic evidence around the context of INLINEFORM0 sometimes provides clues to the intrinsic influence of INLINEFORM1 on INLINEFORM2 . Here we consider word-level and structural features.
(i) LF:NGram. Different levels of INLINEFORM0 -grams (1-grams, 2-grams and 3-grams) are extracted from the reference context to capture the effect of different word combinations BIBREF13 .
(ii) LF:POS. Part-of-speech (POS) tags of the words in the reference sentence are used as features BIBREF14 .
(iii) LF:Tense. The tense of the main verb of the reference sentence is used as a feature BIBREF3 .
(iv) LF:Modal. The presence of modal verbs (e.g., “can”, “may”) often indicates the strength of the claims. Hence, we check the presence of the modal verbs in the reference sentence.
(v) LF:MainV. We use the main-verb of the reference sentence as a direct feature in the model.
(vi) LF:hasBut. We check the presence of the conjunction “but”, which is another clue indicating less confidence in the cited paper.
(vii) LF:DepRel. Following BIBREF13 we use all the dependencies present in the reference context, as given by the dependency parser BIBREF15 .
(viii) LF:POSP. BIBREF16 use seven regular expression patterns over POS tags to capture syntactic information; seven Boolean features then mark the presence of these patterns. We utilize the same regular expressions, shown below with examples (the empty parenthesis in each example indicates the presence of a reference token INLINEFORM0 in the corresponding sentence; some examples are complete sentences, others are not):
“.*\\(\\) VV[DPZN].*”: Chen () showed that cohesion is held in the vast majority of cases for English-French.
“.*(VHP|VHZ) VV.*”: while Cherry and Lin () have shown it to be a strong feature for word alignment...
“.*VH(D|G|N|P|Z) (RB )*VBN.*”: Inducing features for taggers by clustering has been tried by several researchers ().
“.*MD (RB )*VB(RB )* VVN.*”: For example, the likelihood of those generative procedures can be accumulated to get the likelihood of the phrase pair ().
“[^ IW.]*VB(D|P|Z) (RB )*VV[ND].*”: Our experimental set-up is modeled after the human evaluation presented in ().
“(RB )*PP (RB )*V.*”: We use CRF () to perform this tagging.
“.*VVG (NP )*(CC )*(NP ).*”: Following (), we provide the annotators with only short sentences: those with source sentences between 10 and 25 tokens long.
These are all treated as Boolean features. For each feature, we collect all possible evidence from all paper-reference pairs and prepare a vector. Then, for each pair, we check the presence (or absence) of the tokens for the corresponding feature and mark the vector accordingly (which in turn produces a set of Boolean features).
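For the LF:POSP patterns in particular, the check reduces to matching the reference sentence's POS-tag string against the seven regular expressions listed above. The sketch below is illustrative; how the tag string is rendered (space-separated tags, with the reference token written as “()”) is our assumption based on the examples.

```python
import re

# The seven POS-tag patterns listed above (LF:POSP), with escaping normalized
# to standard Python regular expressions.
POS_PATTERNS = [
    r".*\(\) VV[DPZN].*",
    r".*(VHP|VHZ) VV.*",
    r".*VH(D|G|N|P|Z) (RB )*VBN.*",
    r".*MD (RB )*VB(RB )* VVN.*",
    r"[^ IW.]*VB(D|P|Z) (RB )*VV[ND].*",
    r"(RB )*PP (RB )*V.*",
    r".*VVG (NP )*(CC )*(NP ).*",
]

def posp_features(pos_tagged_sentence):
    """Boolean LF:POSP features: `pos_tagged_sentence` is the reference sentence
    rendered as a space-separated string of POS tags, with the reference token
    replaced by '()' as in the examples above."""
    return [bool(re.match(p, pos_tagged_sentence)) for p in POS_PATTERNS]
```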
This group provides other factors to explain why a paper is being cited. (i) MS:GCount. To answer whether a highly cited paper has more academic influence on the citing paper than a less cited one, we measure the number of other papers (except INLINEFORM0 ) citing INLINEFORM1 .
(ii) MS:SelfC. To see the effect of self-citation, we check whether at least one author is common in both INLINEFORM0 and INLINEFORM1 .
(iii) MG:Time. The fact that older papers are rarely cited may not imply that they are less influential. Therefore, we measure the difference between the publication years of INLINEFORM0 and INLINEFORM1 .
(iv) MG:CoCite. It measures the co-citation counts of INLINEFORM0 and INLINEFORM1 defined by INLINEFORM2 , which in turn answers the significance of reference-based similarity driving the academic influence BIBREF18 .
Following BIBREF19 , we further apply a one-step normalization and divide each feature by its maximum value over all the entries.
Dataset and Annotation
We use the AAN dataset BIBREF20 which is an assemblage of papers included in ACL related venues. The texts are preprocessed where sentences, paragraphs and sections are properly separated using different markers. The filtered dataset contains 12,843 papers (on average 6.21 references per paper) and 11,092 unique authors.
Next we use Parscit BIBREF21 to identify the reference contexts from the dataset and then extract the section headings from all the papers. Then each section heading is mapped into one of the following broad categories using the method proposed by BIBREF22 : Abstract, Introduction, Related Work, Conclusion and Rest.
Dataset Labeling. The hardest challenge in this task is that there is no publicly available dataset where references are annotated with intensity values. Therefore, we constructed our own annotated dataset in two different ways. (i) Expert Annotation: we requested members of our research group to participate in this survey. To facilitate the labeling process, we designed a portal where all the papers present in our dataset are listed in a drop-down menu. Upon selecting a paper, its corresponding references were shown with five possible intensity values. The citing and cited papers are also linked to the original texts so that the annotators can read the original papers. A total of 20 researchers participated, and they were asked to label as many paper-reference pairs as they could based on the definitions of intensity provided in Section SECREF2 . The annotation process went on for one month. Out of a total of 1640 annotated pairs, 1270 pairs were retained such that each pair was annotated by at least two annotators, and the final intensity value of a pair was taken to be the average of the scores. The Pearson correlation and Kendall's INLINEFORM0 among the annotators are INLINEFORM1 and INLINEFORM2 respectively. (ii) Author Annotation: we believe that the authors of a paper are the best experts to judge the intensity of the references present in the paper. With this intention, we launched a survey where we contacted the authors who have a significant number of papers in our dataset. We designed a web portal in a similar fashion to the one mentioned earlier, but each author was only shown her own papers in the drop-down menu. Out of 35 requests, 22 authors responded, and a total of 196 pairs were annotated. This time we made sure that each paper-reference pair was annotated by only one author. The percentages of labels in the overall annotated dataset are as follows: 1: 9%, 2: 74%, 3: 9%, 4: 3%, 5: 4%.
Experimental Results
In this section, we start with analyzing the importance of the feature sets in predicting the reference intensity, followed by the detailed results.
Feature Analysis. In order to determine which features most strongly determine the gold-standard labeling, we measure the Pearson correlation between the various features and the ground-truth labels. Figure FIGREF27 (a) shows the average correlation for each feature group, and within each group the rank of the features based on the correlation is shown in Figure FIGREF27 (b). Frequency-based features (FF) turn out to be the best, among which FF:Rest is the most correlated. This set of features is convenient and can be easily computed. Both CF and LF seem to be equally important. However, INLINEFORM0 tends to be less important for this task.
Results of Predictive Models. For the purpose of evaluation, we report the average results after 10-fold cross-validation. Here we consider five baselines to compare with GraLap: (i) Uniform: assign 3 to all the references assuming equal intensity, (ii) SVR+W: recently proposed Support Vector Regression (SVR) with the feature set mentioned in BIBREF4 , (iii) SVR+O: SVR model with our feature set, (iv) C4.5SSL: C4.5 semi-supervised algorithm with our feature set BIBREF23 , and (v) GLM: the traditional graph-based LP model with our feature set BIBREF9 . Three metrics are used to compare the results of the competing models with the annotated labels: Root Mean Square Error (RMSE), Pearson's correlation coefficient ( INLINEFORM0 ), and coefficient of determination ( INLINEFORM1 ).
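For reproducibility, the three metrics can be computed directly from the predicted and annotated intensities; the sketch below uses plain NumPy and is not tied to any particular implementation of the baselines.

```python
import numpy as np

def evaluate(y_true, y_pred):
    """RMSE, Pearson's correlation and coefficient of determination (R^2)
    between predicted and annotated intensity values."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    rho = np.corrcoef(y_true, y_pred)[0, 1]
    r2 = 1.0 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return {"RMSE": rmse, "rho": rho, "R2": r2}
```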
Table TABREF28 shows the performance of the competing models. We incrementally include each feature set into GraLap greedily, on the basis of the ranking shown in Figure FIGREF27 (a). We observe that GraLap with only FF outperforms SVR+O with a 41% improvement in INLINEFORM0 . As expected, the inclusion of PF improves the performance only marginally. However, the overall performance of GraLap is significantly higher than that of any of the baselines ( INLINEFORM1 ).
Applications of Reference Intensity
In this section, we provide four different applications to show the usefulness of measuring the intensity of references. To this end, we consider all the labeled entries for training and run GraLap to predict the intensity of the rest of the paper-reference pairs.
Discovering Influential Articles
Influential papers in a particular area are often discovered by assigning equal weight to all the citations of a paper. We anticipate that considering the reference intensity would return more meaningful results. To show this, we use the following measures individually to compute the influence of a paper: (i) RawCite: the total number of citations per paper; (ii) RawPR: we construct a citation network (nodes: papers, links: citations) and measure the PageRank BIBREF24 of each node INLINEFORM0 : INLINEFORM1 ; where INLINEFORM2 , the damping factor, is set to 0.85, INLINEFORM3 is the total number of nodes, INLINEFORM4 is the set of nodes that have edges to INLINEFORM5 , and INLINEFORM6 is the set of nodes that INLINEFORM7 has an edge to; (iii) InfCite: the weighted version of RawCite, measured by the sum of the intensities of all citations of a paper; (iv) InfPR: the weighted version of RawPR: INLINEFORM8 , where INLINEFORM9 indicates the influence of a reference. We rank all the articles based on these four measures separately. Table TABREF32 (a) shows the Spearman's rank correlation between pair-wise measures. As expected, (i) and (ii) have high correlation (the same holds for (iii) and (iv)), whereas across the two types of measures the correlation is lower. Further, in order to know which measure is more relevant, we conduct a subjective study where we select the top ten papers from each measure and invite the experts (not authors) who annotated the dataset to make a binary decision on whether a recommended paper is relevant. The average pair-wise inter-annotator agreement (based on Cohen's kappa BIBREF25 ) is INLINEFORM10 . Table TABREF32 (b) shows that out of the 10 recommendations of InfPR, 7 (5) papers are marked as influential by the majority (all) of the annotators, followed by InfCite. These results indeed show the utility of measuring reference intensity for discovering influential papers. The top three papers based on InfPR from the entire dataset are shown in Table TABREF33 .
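A sketch of the four measures is shown below. The use of networkx and the exact handling of edge weights are our choices; the paper only states that the citation count and the PageRank transition are weighted by the predicted intensity.

```python
import networkx as nx

def influence_scores(citations, intensity):
    """`citations` is a list of (u, v) pairs meaning paper u cites paper v;
    `intensity[(u, v)]` is the predicted reference intensity of v in u."""
    G = nx.DiGraph()
    for (u, v) in citations:
        G.add_edge(u, v, weight=intensity.get((u, v), 1.0))

    raw_cite = dict(G.in_degree())                              # RawCite
    inf_cite = dict(G.in_degree(weight="weight"))               # InfCite
    raw_pr = nx.pagerank(G, alpha=0.85, weight=None)            # RawPR (unweighted)
    inf_pr = nx.pagerank(G, alpha=0.85, weight="weight")        # InfPR (intensity-weighted)
    return raw_cite, raw_pr, inf_cite, inf_pr
```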
Identifying Influential Authors
H-index, a measure of impact/influence of an author, considers each citation with equal weight BIBREF29 . Here we incorporate the notion of reference intensity into it and define hif-index.
Definition 2 An author INLINEFORM0 with a set of papers INLINEFORM1 has an hif-index equals to INLINEFORM2 , if INLINEFORM3 is the largest value such that INLINEFORM4 ; where INLINEFORM5 is the sum of intensities of all citations of INLINEFORM6 .
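The placeholders elide the exact threshold condition, but by analogy with the standard h-index the natural reading is: the largest h such that at least h of the author's papers have an intensity sum of at least h. A sketch under that assumption:

```python
def hif_index(citation_intensities):
    """hif-index of an author: `citation_intensities[p]` is the sum of the
    intensities of all citations received by the author's paper p."""
    scores = sorted(citation_intensities.values(), reverse=True)
    h = 0
    for i, s in enumerate(scores, start=1):
        if s >= i:
            h = i
        else:
            break
    return h
```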
We consider 37 ACL fellows as the gold-standard list of influential authors. For comparative evaluation, we consider the total number of papers (TotP), the total number of citations (TotC) and the average citations per paper (AvgC) as three competing measures, along with h-index and hif-index. We arrange all the authors in our dataset in decreasing order of each measure. Figure FIGREF36 (a) shows the Spearman's rank correlation among the common elements across pair-wise rankings. Figure FIGREF36 (b) shows the INLINEFORM0 of the five competing measures at identifying ACL fellows. We observe that hif-index performs significantly well, with an overall precision of INLINEFORM1 , followed by AvgC ( INLINEFORM2 ), h-index ( INLINEFORM3 ), TotC ( INLINEFORM4 ) and TotP ( INLINEFORM5 ). This result is encouraging evidence that reference intensity could improve the identification of influential authors. The top three authors based on hif-index are shown in Table TABREF33 .
Effect on Recommendation System
Here we show the effectiveness of reference intensity by applying it to a real paper recommendation system. To this end, we consider FeRoSA BIBREF30 , a new (probably the first) framework for faceted recommendation of scientific articles, where given a query paper it provides facet-wise recommendations, with each facet representing the purpose of the recommendation BIBREF30 . The methodology is based on random walk with restarts (RWR) initiated from a query paper. The model is built on the AAN dataset and considers both the citation links and the content information to produce the most relevant results. Instead of using the unweighted citation network, here we use the weighted network with each edge labeled by the intensity score. The final recommendation of FeRoSA is obtained by performing RWR with the transition probability proportional to the edge weight (we call this Inf-FeRoSA). We observe that Inf-FeRoSA achieves an average precision of INLINEFORM0 at top 10 recommendations, which is 14% higher than FeRoSA for the flat version and 12.34% higher than FeRoSA for the faceted version.
Detecting Citation Stacking
Recently, Thomson Reuters began screening for journals that exchange a large number of anomalous citations with other journals in a cartel-like arrangement, often known as “citation stacking” BIBREF31 , BIBREF32 . This sort of citation stacking is much more pernicious and difficult to detect. We anticipate that this behavior can be detected using reference intensity. Since the AAN dataset does not have journal information, we use the DBLP dataset BIBREF8 , where the complete metadata information (along with reference contexts and abstracts) is available, except the full content of the papers (559,338 papers and 681 journals; more details in BIBREF33 ). From this dataset, we extract all the features mentioned in Section SECREF10 except the ones that require full text, and run our model using the existing annotated dataset as training instances. We measure the traditional impact factor ( INLINEFORM0 ) of the journals and the impact factor after considering the reference intensity ( INLINEFORM1 ). Figure FIGREF39 (a) shows that there are a few journals whose INLINEFORM2 significantly deviates (3 INLINEFORM3 from the mean) from INLINEFORM4 ; out of the suspected journals, 70% also suffer from the effect of self-journal citations (shown in Figure FIGREF39 (b)), examples including Expert Systems with Applications (current INLINEFORM5 of INLINEFORM6 ). One direction of future work would be to predict such journals as early as possible after their first appearance.
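The flagging step can be sketched as follows; exactly which quantity is tested against the 3-sigma threshold is hidden behind the placeholders, so using the difference between the two impact factors here is an assumption.

```python
import numpy as np

def flag_suspect_journals(if_raw, if_weighted, k=3.0):
    """Flag journals whose intensity-weighted impact factor deviates from the
    raw impact factor by more than k standard deviations of the difference.
    Both arguments map journal -> impact-factor value."""
    journals = sorted(if_raw)
    diff = np.array([if_weighted[j] - if_raw[j] for j in journals])
    mu, sd = diff.mean(), diff.std()
    return [j for j, d in zip(journals, diff) if abs(d - mu) > k * sd]
```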
Related Work
Although citation-count-based metrics are widely accepted BIBREF5 , BIBREF34 , the belief that mere counting of citations is dubious has also been a subject of study BIBREF35 . BIBREF36 was the first to explain the reasons for citing a paper. BIBREF37 introduced a method for the rapid development of complex rule bases for classifying text segments. BIBREF16 focused on a less manual approach by learning domain-insensitive features from textual, physical, and syntactic aspects. To address concerns about h-index, different alternative measures have been proposed BIBREF38 ; however, they too could benefit from filtering or weighting references with a model of influence. Several approaches have been proposed to weight citations based on factors such as the prestige of the citing journal BIBREF39 , BIBREF40 , the prestige of an author BIBREF41 , and the frequency of citations in citing papers BIBREF42 . Recently, BIBREF4 proposed an SVR-based approach to measure the intensity of citations. Our methodology differs from this approach in at least four significant ways: (i) they used six very shallow features, whereas we consider features from different dimensions; (ii) they labeled the dataset with the help of independent annotators, whereas we additionally ask the authors of the citing papers to identify the influential references, which is very realistic BIBREF43 ; (iii) they adopted SVR for labeling, which does not perform well with few training instances, whereas we propose GraLap, designed specifically for small training sets; (iv) the four applications of reference intensity mentioned here are completely new and can further trigger a reassessment of existing bibliometrics.
Conclusion
We argued that giving equal weight to all references might not be a good idea, not only for gauging the success of a research work, but also for tracking follow-up work or recommending research papers. The annotated dataset has tremendous potential to be utilized for other research. Moreover, GraLap can be used for any semi-supervised learning problem. Each application mentioned here needs separate attention. In the future, we shall look into more linguistic evidence to improve our model. | (i) Uniform, (ii) SVR+W, (iii) SVR+O, (iv) C4.5SSL, (v) GLM
3d583a0675ad34eb7a46767ef5eba5f0ea898aa9 | 3d583a0675ad34eb7a46767ef5eba5f0ea898aa9_0 | Q: What is the architecture of the model?
Text: Introduction
Code-switching has received a lot of attention from the speech and computational linguistics communities, especially on how to automatically recognize text from speech and understand the structure within it. This phenomenon is very common in bilingual and multilingual communities. For decades, linguists have studied this phenomenon and found that speakers switch at certain points, not randomly, and obey several constraints which point to the code-switched position in an utterance BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . These hypotheses have been empirically supported by observing that bilinguals tend to code-switch intra-sententially at certain (morpho)-syntactic boundaries BIBREF5 . BIBREF1 defined the well-known theory that constrains the code-switch between a functional head and its complement, given the strong relationship between the two constituents, which corresponds to a hierarchical structure in terms of Part-of-Speech (POS) tags. BIBREF3 introduced the Matrix-Language Frame model for the intra-sentential case, where the primary language is called the Matrix Language and the second one the Embedded Language BIBREF2 . A language island was then introduced, which is a constituent composed entirely of morphemes from one language. According to the Matrix-Language Frame Model, both matrix language (ML) islands and embedded language (EL) islands are well-formed in their own grammars, and the EL islands are constrained under the ML grammar BIBREF6 . BIBREF7 studied determiner–noun switches in Spanish–English bilinguals.
Code-switching can be classified into two categories: intra-sentential and inter-sentential switches BIBREF0 . Intra-sentential switch defines a shift from one language to another language within an utterance. Inter-sentential switch refers to the change between two languages in a single discourse, where the switching occurs after a sentence in the first language has been completed and the next sentence starts with a new language. The example of the intra-sentential switch is shown in (1), and the inter-sentential switch is shown in (2).
Language modeling using only word lexicons is not adequate to learn the complexity of code-switching patterns, especially in a low-resource setting. Learning syntactic features such as POS tags and language identifiers at the same time provides shared grammatical information that constrains the next-word prediction. For this reason, we propose a multi-task learning framework for the code-switching language modeling task which is able to leverage syntactic features such as language and POS tags.
The main contribution of this paper is two-fold. First, a multi-task learning model is proposed to jointly learn the language modeling task and the POS sequence tagging task on code-switched utterances. Second, we incorporate language information into the POS tags to create bilingual tags - they distinguish tags between Chinese and English. The POS tag features are shared with the language model and enrich the features to better learn where to switch. From our experimental results, we find that our method improves the perplexity on the SEAME Phase I and Phase II datasets BIBREF8 .
Related Work
The earliest language modeling research on code-switching data applied linguistic theories to computational models, such as Inversion Constraints and Functional Head Constraints on Chinese-English code-switching data BIBREF9 , BIBREF10 . BIBREF11 built a bilingual language model which is trained by interpolating two monolingual language models, with statistical machine translation (SMT) based text generation used to generate artificial code-switching text. BIBREF12 , BIBREF13 introduced a class-based method using an RNNLM for computing the posterior probability and added POS tags to the input. BIBREF14 explored the combination of Brown word clusters, open-class words, and clusters of open-class word embeddings as hand-crafted features for improving the factored language model. In addition, BIBREF15 proposed a generative language model with explicit phrase structure. A method of tying input and output embeddings helped to reduce the number of parameters in the language model and improved the perplexity BIBREF16 .
Learning multiple NLP tasks using multi-task learning has recently been applied in many domains BIBREF17 , BIBREF18 , BIBREF19 . These works presented a joint many-task model to handle multiple NLP tasks and share parameters with growing depth in a single end-to-end model. A work by BIBREF20 showed the potential of combining POS tagging with the Named-Entity Recognition task.
Methodology
This section shows how to build the features and how to train our multi-task learning language model. Multi-task learning consists of two NLP tasks: Language modeling and POS sequence tagging.
Feature Representation
In the model, word lexicons and syntactic features are used as input.
Word Lexicons: Sentences are encoded as 1-hot vectors and our vocabulary is built from training data.
Syntactic Features: For each language island, i.e., a phrase within the same language, we extract POS tags iteratively using the Chinese and English Penn Treebank parsers BIBREF21 , BIBREF22 . There are 31 English POS tags and 34 Chinese POS tags. Chinese words are distinguishable from English words since they have a different encoding. We add language information to the POS tag label to discriminate POS tags between the two languages.
Model Description
Figure FIGREF7 illustrates our multi-task learning extension to the recurrent language model. In this multi-task learning setting, the tasks are language modeling and POS tagging. The POS tagging task shares the POS tag vector and the hidden states with the LM task, but it does not receive any information from the other loss. Let INLINEFORM0 be the word lexicon in the document and INLINEFORM1 be the POS tag of the corresponding INLINEFORM2 at index INLINEFORM3 . They are mapped into embedding matrices to get their INLINEFORM4 -dimensional vector representations INLINEFORM5 and INLINEFORM6 . The input embedding weights are tied with the output weights. We concatenate INLINEFORM7 and INLINEFORM8 as the input of INLINEFORM9 . The information from the POS tag sequence is shared with the language model through this step. INLINEFORM10 INLINEFORM11
where INLINEFORM0 denotes the concatenation operator, INLINEFORM1 and INLINEFORM2 are the final hidden states of INLINEFORM3 and INLINEFORM4 respectively. INLINEFORM5 and INLINEFORM6 , the hidden states from both LSTMs are summed before predicting the next word. INLINEFORM7 INLINEFORM8
The word distribution of the next word INLINEFORM0 is normalized using the softmax function. The model uses cross-entropy losses as error functions INLINEFORM1 and INLINEFORM2 for the language modeling task and the POS tagging task respectively. We optimize the multi-objective losses using the back-propagation algorithm, and we perform a weighted linear sum of the losses for each individual task. INLINEFORM3
where INLINEFORM0 is the weight of the loss in the training.
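A sketch of this architecture in PyTorch is shown below. The placeholders hide some details (in particular, exactly what the second LSTM consumes and how the POS tags are predicted), so those parts, along with all hyper-parameter values and names, are our assumptions; only the embedding concatenation, the summing of the two LSTMs' hidden states before the word softmax, the input-output weight tying, and the weighted sum of the two cross-entropy losses are taken from the description above.

```python
import torch
import torch.nn as nn

class MultiTaskCSLM(nn.Module):
    """Sketch: word and POS embeddings are concatenated and fed to the LM LSTM,
    a second LSTM reads the POS embeddings for the tagging task, the two hidden
    states are summed before the word softmax, and the input word embedding is
    tied with the output layer."""
    def __init__(self, n_words, n_tags, d=500, p_drop=0.3):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, d)
        self.tag_emb = nn.Embedding(n_tags, d)
        self.drop = nn.Dropout(p_drop)
        self.lm_lstm = nn.LSTM(2 * d, d, num_layers=2, batch_first=True, dropout=p_drop)
        self.tag_lstm = nn.LSTM(d, d, num_layers=2, batch_first=True, dropout=p_drop)
        self.word_out = nn.Linear(d, n_words)
        self.word_out.weight = self.word_emb.weight          # tie input/output weights
        self.tag_out = nn.Linear(d, n_tags)

    def forward(self, words, tags):
        w = self.drop(self.word_emb(words))
        t = self.drop(self.tag_emb(tags))
        h_lm, _ = self.lm_lstm(torch.cat([w, t], dim=-1))    # concatenated input
        h_tag, _ = self.tag_lstm(t)
        word_logits = self.word_out(self.drop(h_lm + h_tag)) # sum of hidden states
        tag_logits = self.tag_out(self.drop(h_tag))
        return word_logits, tag_logits

def multitask_loss(word_logits, tag_logits, next_words, next_tags, p=0.5):
    ce = nn.CrossEntropyLoss()
    lm_loss = ce(word_logits.reshape(-1, word_logits.size(-1)), next_words.reshape(-1))
    tag_loss = ce(tag_logits.reshape(-1, tag_logits.size(-1)), next_tags.reshape(-1))
    return p * lm_loss + (1 - p) * tag_loss   # one plausible weighting; the exact form is elided
```

The loss weight p here plays the role of the hyper-parameter swept over [0.25, 0.5, 0.75] in the experiments below.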
Experimental Setup
In this section, we present the experimental setup for this task.
Corpus: SEAME (South East Asia Mandarin-English) is a conversational Mandarin-English code-switching speech corpus consisting of spontaneously spoken interviews and conversations BIBREF8 . Our dataset (LDC2015S04) is the most updated version of the Linguistic Data Consortium (LDC) database. However, the statistics are not identical to BIBREF23 . The corpus consists of two phases. In Phase I, only selected audio segments were transcribed. In Phase II, most of the audio segments were transcribed. According to the authors, it was not possible to restore the original dataset. The authors only used the Phase I corpus. A few speaker ids are not in the speaker list provided by the authors BIBREF23 . Therefore, as a workaround, we added these ids to the train set. For future reference, the recording lists are included in the supplementary material.
Preprocessing: First, we tokenized English and Chinese words using the Stanford NLP toolkit BIBREF24 . Second, all hesitations and punctuation were removed except apostrophes, for example: “let's" and “it's". Table TABREF9 and Table TABREF10 show the statistics of the SEAME Phase I and II corpora. Table TABREF11 shows the most common trigger POS tags for the Phase II corpus.
Training: The baseline model was trained using RNNLM BIBREF25 . Then, we trained our LSTM models with different hidden sizes [200, 500]. All LSTMs have 2 layers and are unrolled for 35 steps. The embedding size is equal to the LSTM hidden size. Dropout regularization BIBREF26 was applied to the word embedding vector and the POS tag embedding vector, and to the recurrent output BIBREF27 , with values between [0.2, 0.4]. We used a batch size of 20 in training. An EOS tag was used to separate every sentence. We chose Stochastic Gradient Descent and started with a learning rate of 20; if there was no improvement during evaluation, we reduced the learning rate by a factor of 0.75. The gradient was clipped to a maximum of 0.25. For multi-task learning, we used different loss-weight hyper-parameters INLINEFORM0 in the range of [0.25, 0.5, 0.75]. We tuned our model on the development set and evaluated our best model on the test set, taking perplexity as the final evaluation metric, where perplexity is calculated by taking the exponential of the average negative log-likelihood per word. INLINEFORM1
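Concretely, with the cross-entropy loss accumulated in nats over the evaluation set, this is a one-line computation (an illustrative sketch, not the authors' code):

```python
import math

def perplexity(total_nll, n_tokens):
    """Perplexity from the summed negative log-likelihood (natural log)
    and the number of evaluated tokens."""
    return math.exp(total_nll / n_tokens)
```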
Results
Table TABREF14 and Table TABREF15 show the results of multi-task learning with different values of the hyper-parameter INLINEFORM0 . We observe that the multi-task model with INLINEFORM1 achieved the best performance. We compare our multi-task learning model against the RNNLM and LSTM baselines. The baselines correspond to recurrent neural networks trained with word lexicons only. Table TABREF16 and Table TABREF17 present the overall results from the different models. The multi-task model performs better than the LSTM baseline by 9.7% in perplexity in Phase I and 7.4% in Phase II. The performance of our model in Phase II is also better than the RNNLM (8.9%) and far better than the model presented in BIBREF13 in Phase I.
Moreover, the results show that adding the shared POS tag representation to INLINEFORM0 does not hurt the performance of the language modeling task. This implies that the syntactic information helps the model to better predict the next word in the sequence. To further verify this hypothesis, we conduct two analyses by visualizing our prediction examples in Figure FIGREF13 :
Results with different hyper-parameter settings
Conclusion
In this paper, we propose a multi-task learning approach for code-switched language modeling. The multi-task learning models achieve the best performance and outperform the LSTM baseline with 9.7% and 7.4% improvements in perplexity for the Phase I and Phase II SEAME corpora respectively. This implies that by training two different NLP tasks together, the model can correctly learn the correlation between them. Indeed, the syntactic information helps the model to be aware of code-switching points, and it improves the performance over the language model. Finally, we conclude that multi-task learning has good potential for code-switching language modeling research and there is still room for improvement, especially by adding more language pairs and corpora.
Acknowledgments
This work is partially funded by ITS/319/16FP of the Innovation Technology Commission, HKUST 16214415 & 16248016 of Hong Kong Research Grants Council, and RDC 1718050-0 of EMOS.AI.
Recording Lists
We split the recording ids into train, development, and test set as the following: | LSTM |
d7d41a1b8bbb1baece89b28962d23ee4457b9c3a | d7d41a1b8bbb1baece89b28962d23ee4457b9c3a_0 | Q: What languages are explored in the work?
Text: Introduction
Code-switching has received a lot of attention from the speech and computational linguistics communities, especially on how to automatically recognize text from speech and understand the structure within it. This phenomenon is very common in bilingual and multilingual communities. For decades, linguists have studied this phenomenon and found that speakers switch at certain points, not randomly, and obey several constraints which point to the code-switched position in an utterance BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . These hypotheses have been empirically supported by observing that bilinguals tend to code-switch intra-sententially at certain (morpho)-syntactic boundaries BIBREF5 . BIBREF1 defined the well-known theory that constrains the code-switch between a functional head and its complement, given the strong relationship between the two constituents, which corresponds to a hierarchical structure in terms of Part-of-Speech (POS) tags. BIBREF3 introduced the Matrix-Language Frame model for the intra-sentential case, where the primary language is called the Matrix Language and the second one the Embedded Language BIBREF2 . A language island was then introduced, which is a constituent composed entirely of morphemes from one language. According to the Matrix-Language Frame Model, both matrix language (ML) islands and embedded language (EL) islands are well-formed in their own grammars, and the EL islands are constrained under the ML grammar BIBREF6 . BIBREF7 studied determiner–noun switches in Spanish–English bilinguals.
Code-switching can be classified into two categories: intra-sentential and inter-sentential switches BIBREF0 . Intra-sentential switch defines a shift from one language to another language within an utterance. Inter-sentential switch refers to the change between two languages in a single discourse, where the switching occurs after a sentence in the first language has been completed and the next sentence starts with a new language. The example of the intra-sentential switch is shown in (1), and the inter-sentential switch is shown in (2).
Language modeling using only word lexicons is not adequate to learn the complexity of code-switching patterns, especially in a low-resource setting. Learning syntactic features such as POS tags and language identifiers at the same time provides shared grammatical information that constrains the next-word prediction. For this reason, we propose a multi-task learning framework for the code-switching language modeling task which is able to leverage syntactic features such as language and POS tags.
The main contribution of this paper is two-fold. First, a multi-task learning model is proposed to jointly learn the language modeling task and the POS sequence tagging task on code-switched utterances. Second, we incorporate language information into the POS tags to create bilingual tags - they distinguish tags between Chinese and English. The POS tag features are shared with the language model and enrich the features to better learn where to switch. From our experimental results, we find that our method improves the perplexity on the SEAME Phase I and Phase II datasets BIBREF8 .
Related Work
The earliest language modeling research on code-switching data applied linguistic theories to computational models, such as Inversion Constraints and Functional Head Constraints on Chinese-English code-switching data BIBREF9 , BIBREF10 . BIBREF11 built a bilingual language model which is trained by interpolating two monolingual language models, with statistical machine translation (SMT) based text generation used to generate artificial code-switching text. BIBREF12 , BIBREF13 introduced a class-based method using an RNNLM for computing the posterior probability and added POS tags to the input. BIBREF14 explored the combination of Brown word clusters, open-class words, and clusters of open-class word embeddings as hand-crafted features for improving the factored language model. In addition, BIBREF15 proposed a generative language model with explicit phrase structure. A method of tying input and output embeddings helped to reduce the number of parameters in the language model and improved the perplexity BIBREF16 .
Learning multiple NLP tasks using multi-task learning has recently been applied in many domains BIBREF17 , BIBREF18 , BIBREF19 . These works presented a joint many-task model to handle multiple NLP tasks and share parameters with growing depth in a single end-to-end model. A work by BIBREF20 showed the potential of combining POS tagging with the Named-Entity Recognition task.
Methodology
This section shows how to build the features and how to train our multi-task learning language model. Multi-task learning consists of two NLP tasks: Language modeling and POS sequence tagging.
Feature Representation
In the model, word lexicons and syntactic features are used as input.
Word Lexicons: Sentences are encoded as 1-hot vectors and our vocabulary is built from training data.
Syntactic Features: For each language island, i.e., a phrase within the same language, we extract POS tags iteratively using the Chinese and English Penn Treebank parsers BIBREF21 , BIBREF22 . There are 31 English POS tags and 34 Chinese POS tags. Chinese words are distinguishable from English words since they have a different encoding. We add language information to the POS tag label to discriminate POS tags between the two languages.
Model Description
Figure FIGREF7 illustrates our multi-task learning extension to the recurrent language model. In this multi-task learning setting, the tasks are language modeling and POS tagging. The POS tagging task shares the POS tag vector and the hidden states with the LM task, but it does not receive any information from the other loss. Let INLINEFORM0 be the word lexicon in the document and INLINEFORM1 be the POS tag of the corresponding INLINEFORM2 at index INLINEFORM3 . They are mapped into embedding matrices to get their INLINEFORM4 -dimensional vector representations INLINEFORM5 and INLINEFORM6 . The input embedding weights are tied with the output weights. We concatenate INLINEFORM7 and INLINEFORM8 as the input of INLINEFORM9 . The information from the POS tag sequence is shared with the language model through this step. INLINEFORM10 INLINEFORM11
where INLINEFORM0 denotes the concatenation operator, INLINEFORM1 and INLINEFORM2 are the final hidden states of INLINEFORM3 and INLINEFORM4 respectively. INLINEFORM5 and INLINEFORM6 , the hidden states from both LSTMs are summed before predicting the next word. INLINEFORM7 INLINEFORM8
The word distribution of the next word INLINEFORM0 is normalized using the softmax function. The model uses cross-entropy losses as error functions INLINEFORM1 and INLINEFORM2 for the language modeling task and the POS tagging task respectively. We optimize the multi-objective losses using the back-propagation algorithm, and we perform a weighted linear sum of the losses for each individual task. INLINEFORM3
where INLINEFORM0 is the weight of the loss in the training.
Experimental Setup
In this section, we present the experimental setup for this task.
Corpus: SEAME (South East Asia Mandarin-English) is a conversational Mandarin-English code-switching speech corpus consisting of spontaneously spoken interviews and conversations BIBREF8 . Our dataset (LDC2015S04) is the most updated version of the Linguistic Data Consortium (LDC) database. However, the statistics are not identical to BIBREF23 . The corpus consists of two phases. In Phase I, only selected audio segments were transcribed. In Phase II, most of the audio segments were transcribed. According to the authors, it was not possible to restore the original dataset. The authors only used the Phase I corpus. A few speaker ids are not in the speaker list provided by the authors BIBREF23 . Therefore, as a workaround, we added these ids to the train set. For future reference, the recording lists are included in the supplementary material.
Preprocessing: First, we tokenized English and Chinese words using the Stanford NLP toolkit BIBREF24 . Second, all hesitations and punctuation were removed except apostrophes, for example: “let's" and “it's". Table TABREF9 and Table TABREF10 show the statistics of the SEAME Phase I and II corpora. Table TABREF11 shows the most common trigger POS tags for the Phase II corpus.
Training: The baseline model was trained using RNNLM BIBREF25 . Then, we trained our LSTM models with different hidden sizes [200, 500]. All LSTMs have 2 layers and are unrolled for 35 steps. The embedding size is equal to the LSTM hidden size. Dropout regularization BIBREF26 was applied to the word embedding vector and the POS tag embedding vector, and to the recurrent output BIBREF27 , with values between [0.2, 0.4]. We used a batch size of 20 in training. An EOS tag was used to separate every sentence. We chose Stochastic Gradient Descent and started with a learning rate of 20; if there was no improvement during evaluation, we reduced the learning rate by a factor of 0.75. The gradient was clipped to a maximum of 0.25. For multi-task learning, we used different loss-weight hyper-parameters INLINEFORM0 in the range of [0.25, 0.5, 0.75]. We tuned our model on the development set and evaluated our best model on the test set, taking perplexity as the final evaluation metric, where perplexity is calculated by taking the exponential of the average negative log-likelihood per word. INLINEFORM1
Results
Table TABREF14 and Table TABREF15 show the results of multi-task learning with different values of the hyper-parameter INLINEFORM0 . We observe that the multi-task model with INLINEFORM1 achieved the best performance. We compare our multi-task learning model against the RNNLM and LSTM baselines. The baselines correspond to recurrent neural networks trained with word lexicons only. Table TABREF16 and Table TABREF17 present the overall results from the different models. The multi-task model performs better than the LSTM baseline by 9.7% in perplexity in Phase I and 7.4% in Phase II. The performance of our model in Phase II is also better than the RNNLM (8.9%) and far better than the model presented in BIBREF13 in Phase I.
Moreover, the results show that adding the shared POS tag representation to INLINEFORM0 does not hurt the performance of the language modeling task. This implies that the syntactic information helps the model to better predict the next word in the sequence. To further verify this hypothesis, we conduct two analyses by visualizing our prediction examples in Figure FIGREF13 :
Results with different hyper-parameter settings
Conclusion
In this paper, we propose a multi-task learning approach for code-switched language modeling. The multi-task learning models achieve the best performance and outperform the LSTM baseline with 9.7% and 7.4% improvements in perplexity for the Phase I and Phase II SEAME corpora respectively. This implies that by training two different NLP tasks together, the model can correctly learn the correlation between them. Indeed, the syntactic information helps the model to be aware of code-switching points, and it improves the performance over the language model. Finally, we conclude that multi-task learning has good potential for code-switching language modeling research and there is still room for improvement, especially by adding more language pairs and corpora.
Acknowledgments
This work is partially funded by ITS/319/16FP of the Innovation Technology Commission, HKUST 16214415 & 16248016 of Hong Kong Research Grants Council, and RDC 1718050-0 of EMOS.AI.
Recording Lists
We split the recording ids into train, development, and test set as the following: | Mandarin, English |
b458ebca72e3013da3b4064293a0a2b4b5ef1fa6 | b458ebca72e3013da3b4064293a0a2b4b5ef1fa6_0 | Q: What is the state-of-the-art neural coreference resolution model?
Text: Introduction
Natural language processing (NLP) with neural networks has grown in importance over the last few years. Neural networks provide state-of-the-art models for tasks like coreference resolution, language modeling, and machine translation BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . However, since these models are trained on human language texts, a natural question is whether they exhibit bias based on gender or other characteristics, and, if so, how this bias should be mitigated. This is the question that we address in this paper.
Prior work provides evidence of bias in autocomplete suggestions BIBREF5 and differences in accuracy of speech recognition based on gender and dialect BIBREF6 on popular online platforms. Word embeddings, initial pre-processors in many NLP tasks, embed words of a natural language into a vector space of limited dimension to use as their semantic representation. BIBREF7 and BIBREF8 observed that popular word embeddings including word2vec BIBREF9 exhibit gender bias mirroring stereotypical gender associations such as the eponymous BIBREF7 "Man is to computer programmer as Woman is to homemaker".
Yet the question of how to measure bias in a general way for neural NLP tasks has not been studied. Our first contribution is a general benchmark to quantify gender bias in a variety of neural NLP tasks. Our definition of bias loosely follows the idea of causal testing: matched pairs of individuals (instances) that differ in only a targeted concept (like gender) are evaluated by a model and the difference in outcomes (or scores) is interpreted as the causal influence of the concept in the scrutinized model. The definition is parametric in the scoring function and the target concept. Natural scoring functions exist for a number of neural natural language processing tasks.
We instantiate the definition for two important tasks—coreference resolution and language modeling. Coreference resolution is the task of finding words and expressions referring to the same entity in a natural language text. The goal of language modeling is to model the distribution of word sequences. For neural coreference resolution models, we measure the gender coreference score disparity between gender-neutral words and gendered words like the disparity between “doctor” and “he” relative to “doctor” and “she” pictured as edge weights in Figure FIGREF2 . For language models, we measure the disparities of emission log-likelihood of gender-neutral words conditioned on gendered sentence prefixes as is shown in Figure FIGREF2 . Our empirical evaluation with state-of-the-art neural coreference resolution and textbook RNN-based language models BIBREF2 , BIBREF1 , BIBREF10 trained on benchmark datasets finds gender bias in these models .
Next we turn our attention to mitigating the bias. BIBREF7 introduced a technique for debiasing word embeddings which has been shown to mitigate unwanted associations in analogy tasks while preserving the embedding's semantic properties. Given their widespread use, a natural question is whether this technique is sufficient to eliminate bias from downstream tasks like coreference resolution and language modeling. As our second contribution, we explore this question empirically. We find that while the technique does reduce bias, the residual bias is considerable. We further discover that debiasing models that make use of embeddings that are co-trained with their other parameters BIBREF1 , BIBREF10 exhibit a significant drop in accuracy.
Our third contribution is counterfactual data augmentation (CDA): a generic methodology to mitigate bias in neural NLP tasks. For each training instance, the method adds a copy with an intervention on its targeted words, replacing each with its partner, while maintaining the same, non-intervened, ground truth. The method results in a dataset of matched pairs with ground truth independent of the target distinction (see Figure FIGREF2 and Figure FIGREF2 for examples). This encourages learning algorithms to not pick up on the distinction.
Our empirical evaluation shows that CDA effectively decreases gender bias while preserving accuracy. We also explore the space of mitigation strategies with CDA, a prior approach to word embedding debiasing (WED), and their compositions. We show that CDA outperforms WED, drastically so when word embeddings are co-trained. For pre-trained embeddings, the two methods can be effectively composed. We also find that as training proceeds on the original data set with gradient descent the gender bias grows as the loss reduces, indicating that the optimization encourages bias; CDA mitigates this behavior.
In the body of this paper we present necessary background (Section SECREF2 ), our methods (Sections SECREF3 and SECREF4 ), their evaluation (Section SECREF5 ), and speculate on future research (Section SECREF6 ).
Background
In this section we briefly summarize requisite elements of neural coreference resolution and language modeling systems: scoring layers and loss evaluation, performance measures, and the use of word embeddings and their debiasing. The tasks and models we experiment with later in this paper and their properties are summarized in Table TABREF6 .
Measuring Bias
Our definition of bias loosely follows the idea of causal testing: matched pairs of individuals (instances) that differ in only a targeted concept (like gender) are evaluated by a model and the difference in outcomes is interpreted as the causal influence of the concept in the scrutinized model.
As an example, we can choose a test corpus of simple sentences relating the word “professor” to the male pronoun “he” as in sentence INLINEFORM0 of Figure FIGREF2 along with the matched pair INLINEFORM1 that swaps in “she” in place of “he”. With each element of the matched pair, we also indicate which mentions in each sentence, or context, should attain the same score. In this case, the complete matched pair is INLINEFORM2 and INLINEFORM3 . We measure the difference in scores assigned to the coreference of the pronoun with the occupation across the matched pair of sentences.
We begin with the general definition and instantiate it for measuring gender bias in relation to occupations for both coreference resolution and language modeling.
Definition 1 (Score Bias)
Given a set of matched pairs INLINEFORM0 (or class of sets INLINEFORM1 ) and a scoring function INLINEFORM2 , the bias of INLINEFORM3 under the concept(s) tested by INLINEFORM4 (or INLINEFORM5 ), written INLINEFORM6 (or INLINEFORM7 ) is the expected difference in scores assigned to the matched pairs (or expected absolute bias across class members): INLINEFORM8
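To make the definition concrete, the following is a minimal sketch of how the expected score difference over matched pairs could be computed; the `score` callable and the matched-pair tuples are hypothetical stand-ins for a model's coreference or emission scores, not part of the original formulation.

```python
from statistics import mean

def score_bias(matched_pairs, score):
    """Expected difference in scores across a set of matched pairs.

    matched_pairs: iterable of (original, intervened) instances.
    score: callable returning a real-valued model score for an instance,
           e.g., a coreference score or an emission log-likelihood.
    """
    return mean(score(x) - score(x_hat) for x, x_hat in matched_pairs)

def aggregate_bias(matched_pair_sets, score):
    """Expected absolute bias across a class of matched-pair sets."""
    return mean(abs(score_bias(pairs, score)) for pairs in matched_pair_sets)
```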
Occupation-Gender Bias
The principal concept we address in this paper is gender, and the biases we will focus on in the evaluation relate gender to gender-neutral occupations. To define the matched pairs to test this type of bias, we employ interventions: transformations of instances to their matches. Interventions are a more convenient way to reason about the concepts being tested under a set of matched pairs.
Definition 2 (Intervention Matches) Given an instance INLINEFORM0 , corpus INLINEFORM1 , or class INLINEFORM2 , and an intervention INLINEFORM3 , the intervention matching under INLINEFORM4 is the matched pair INLINEFORM5 or the set of matched pairs INLINEFORM6 , respectively, and is defined as follows. INLINEFORM7
The core intervention used throughout this paper is the naive intervention INLINEFORM0 that swaps every gendered word in its inputs with the corresponding word of the opposite gender. The complete list of swapped words can be found in Supplemental Materials. In Section SECREF4 we define more nuanced forms of intervention for the purpose of debiasing systems.
We construct a set of sentences based on a collection of templates. In the case of coreference resolution, each sentence, or context, includes a placeholder for an occupation word and the male gendered pronoun “he”, while the mentions to score are the occupation and the pronoun. An example of such a template is the sentence “The [OCCUPATION] ran because he is late.”, where the underlined words indicate the mentions for scoring. The complete list can be found in the Supplemental Materials.
Definition 3 (Occupation Bias) Given the list of templates INLINEFORM0 , we construct the matched pair set for computing gender-occupation bias of score function INLINEFORM1 for an occupation INLINEFORM2 by instantiating all of the templates with INLINEFORM3 and producing a matched pair via the naive intervention INLINEFORM4 : INLINEFORM5
To measure the aggregate occupation bias over all occupations INLINEFORM0 we compute bias on the class INLINEFORM1 where INLINEFORM2 .
The bias measures are then simply:
INLINEFORM0
For language modeling the template set differs. There we assume the scoring function is the one that assigns a likelihood of a given word being the next word in some initial sentence fragment. We place the pronoun in the initial fragment thereby making sure the score is conditioned on the presence of the male or female pronoun. We are thus able to control for the frequency disparities between the pronouns in a corpus, focusing on disparities with occupations and not disparities in general occurrence. An example of a test template for language modeling is the fragment “He is a | [OCCUPATION]” where the pipe delineates the sentence prefix from the test word. The rest can be seen in the Supplemental Materials.
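As an illustration, the matched-pair set for one occupation could be generated as sketched below; the template strings and the swap list are small illustrative samples, not the full lists from the Supplemental Materials.

```python
# Illustrative samples only; the full lists appear in the Supplemental Materials.
COREF_TEMPLATES = ["The [OCCUPATION] ran because he is late."]
LM_TEMPLATES = ["He is a | [OCCUPATION]"]          # prefix | test word
GENDER_SWAPS = {"he": "she", "she": "he", "him": "her", "his": "her"}

def naive_intervention(sentence):
    """Swap every gendered word with its opposite-gender partner."""
    return " ".join(GENDER_SWAPS.get(tok.lower(), tok) for tok in sentence.split())

def occupation_matched_pairs(templates, occupation):
    """Instantiate each template with an occupation and pair it with its
    naively intervened copy."""
    sentences = [t.replace("[OCCUPATION]", occupation) for t in templates]
    return [(s, naive_intervention(s)) for s in sentences]

print(occupation_matched_pairs(COREF_TEMPLATES, "doctor"))
# [('The doctor ran because he is late.', 'The doctor ran because she is late.')]
```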
Counterfactual Data Augmentation (CDA)
In the previous section we have shown how to quantify gender bias in coreference resolution systems and language models using a naive intervention, or INLINEFORM0 . The disparities at the core of the bias definitions can be thought of as unwanted effects: the gender of pronouns like he or she influences their coreference strength with an occupation word, or the probability of emitting an occupation word, though ideally it should not. Following the tradition of causal testing, we make use of matched pairs constructed via interventions to augment existing training datasets. By defining the interventions so as to express a particular concept such as gender, we produce datasets that encourage training algorithms to not capture that concept.
Definition 4 (Counterfactual Data Augmentation) Given an intervention INLINEFORM0 , the dataset INLINEFORM1 of input instances INLINEFORM2 can be INLINEFORM3 c INLINEFORM4 , or INLINEFORM5 , to produce the dataset INLINEFORM6 .
Note that the intervention above does not affect the ground truth. This highlights the core feature of the method: an unbiased model should not distinguish between matched pairs, that is, it should produce the same outcome. The intervention is another critical feature as it needs to represent a concept crisply, that is, it needs to produce matched pairs that differ only (or close to it) in the expression of that concept. The simplest augmentation we experiment on is the naive intervention INLINEFORM0 , which captures the distinction between genders on gendered words. The more nuanced intervention we discuss further in this paper relaxes this distinction in the presence of some grammatical structures.
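A minimal sketch of the augmentation step is given below, assuming the dataset is available as (instance, ground truth) pairs and an intervention is a function on instances; the naive intervention sketched earlier could be passed in directly.

```python
def counterfactual_augment(dataset, intervention):
    """Add an intervened copy of every instance while keeping its ground truth."""
    augmented = []
    for instance, label in dataset:
        augmented.append((instance, label))                # original instance
        augmented.append((intervention(instance), label))  # matched counterfactual
    return augmented
```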
Given the use of INLINEFORM0 in the definition of bias in Section SECREF3 , it would be expected that debiasing via naive augmentation completely neutralizes gender bias. However, bias is not the only concern in coreference resolution or language modeling systems; their performance is usually the primary goal. As we evaluate performance on the original corpora, the alterations necessarily reduce performance.
To ensure the predictive power of models trained from augmented data, the generated sentences need to remain semantically and grammatically sound. We assume that if counterfactual sentences are generated properly, the ground truth coreference clustering labels should stay the same for the coreference resolution systems. Since language modeling is an unsupervised task, we do not need to assign labels for the counterfactual sentences.
To define our gender intervention, we employ a bidirectional dictionary of gendered word pairs such as he:she, her:him/his and other definitionally gendered words such as actor:actress, queen:king. The complete list of gendered pairs can be found in the Supplemental Materials. We replace every occurrence (save for the exceptions noted below) of a gendered word in the original corpus with its dual as is the case with INLINEFORM0 .
Flipping a gendered word when it refers to a proper noun such as Queen Elizabeth would result in semantically incorrect sentences. As a result, we do not flip gendered words if they are in a cluster with a proper noun. For coreference resolution, the clustering information is provided by labels in the coreference resolution dataset. Part-of-speech information, which indicates whether a word is a pronoun, is obtained through metadata within the training data.
A final caveat for generating counterfactuals is the appropriate handling of her, he and him. Both he and him would be flipped to her, while her should be flipped to him if it is an objective pronoun and to his if it is a possessive pronoun. This information is also obtained from part-of-speech tags.
The adjustments to the naive intervention for maintaining semantic or grammatical structures produce the grammatical intervention, or INLINEFORM0 .
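The following sketch combines these rules, assuming tokens arrive with part-of-speech tags and a flag marking membership in a cluster that contains a proper noun; the swap dictionary is truncated and the Penn Treebank tag names are an assumption, since the text only states that tags come from metadata in the training data.

```python
# Truncated bidirectional dictionary; the full list is in the Supplemental Materials.
SWAPS = {"he": "she", "she": "he", "him": "her", "his": "her",
         "actor": "actress", "actress": "actor", "king": "queen", "queen": "king"}

def grammatical_intervention(tokens, pos_tags, in_proper_noun_cluster):
    """Swap gendered words, skipping clusters with proper nouns and resolving
    'her' to 'him' (objective) or 'his' (possessive) from the POS tag."""
    flipped = []
    for tok, pos, skip in zip(tokens, pos_tags, in_proper_noun_cluster):
        word = tok.lower()
        if skip:                                   # e.g., part of "Queen Elizabeth"
            flipped.append(tok)
        elif word == "her":
            flipped.append("him" if pos == "PRP" else "his")   # PRP$ = possessive
        elif word in SWAPS:
            flipped.append(SWAPS[word])
        else:
            flipped.append(tok)
    return flipped
```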
Evaluation
In this section we evaluate CDA debiasing across three models from two NLP tasks in comparison/combination with the word embedding debiasing of BIBREF7 . For each configuration of methods we report aggregated occupation bias (marked AOB) (Definition SECREF14 ) and the resulting performance measured on original test sets (without augmentation). Most of the experimentation that follows employs grammatical augmentation, though we investigate the naive intervention in Section SECREF25 .
Neural Coreference Resolution
We use the English coreference resolution dataset from the CoNLL-2012 shared task BIBREF15 , the benchmark dataset for the training and evaluation of coreference resolution. The training dataset contains 2408 documents with 1.3 million words. We use two state-of-the-art neural coreference resolution models described by BIBREF2 and BIBREF1 . We report the average F1 value of the standard MUC, B INLINEFORM0 and CEAF INLINEFORM1 metrics for the original test set.
The model of BIBREF2 uses pretrained word embeddings, thus all features and mention representations are learned from these pretrained embeddings. As a result we can only apply debiasing of BIBREF7 to the pretrained embedding. We evaluate bias on four configurations: no debiasing, debiased embeddings (written INLINEFORM0 ), CDA only, and CDA with INLINEFORM1 . The configurations and resulting aggregate bias measures are shown in Table TABREF20 .
In the aggregate measure, we see that the original model is biased (recall the scale of coreference scores shown in Figure FIGREF2 ). Further, each of the debiasing methods reduces bias to some extent, with the largest reduction when both methods are applied. Impact on performance is negligible in all cases.
Figure FIGREF19 shows the per-occupation bias in Models 1.1 and 1.2. It aligns with the historical gender stereotypes: female-dominant occupations such as nurse, therapist and flight attendant have strong negative bias while male-dominant occupations such as banker, engineer and scientist have strong positive bias. This behaviour is reduced with the application of CDA. | BIBREF2 , BIBREF1 |
1cbca15405632a2e9d0a7061855642d661e3b3a7 | 1cbca15405632a2e9d0a7061855642d661e3b3a7_0 | Q: How much improvement do they get?
Text: Introduction
Satirical news, which uses parody characterized in a conventional news style, has now become a form of entertainment on social media. While news satire is claimed to be purely comedic and for amusement, it makes statements on real events, often with the aim of attaining social criticism and influencing change BIBREF0. Satirical news can also be misleading to readers, even though it is not designed for falsifications. Given such sophistication, satirical news detection is a necessary yet challenging natural language processing (NLP) task. Many feature based fake or satirical news detection systems BIBREF1, BIBREF2, BIBREF3 extract features from word relations given by statistics or a lexical database, as well as other linguistic features. In addition, with the great success of deep learning in NLP in recent years, many end-to-end neural nets based detection systems BIBREF4, BIBREF5, BIBREF6 have been proposed and delivered promising results on satirical news article detection.
However, with the evolution of fast-paced social media, satirical news has been condensed into a satirical-news-in-one-sentence form. For example, one single tweet of “If earth continues to warm at current rate moon will be mostly underwater by 2400" by The Onion is more widely consumed and spread by social media users than the corresponding full article posted on The Onion website. Existing detection systems trained on full document data might not be applicable to this form of satirical news. Therefore, we collect news tweets from satirical news sources such as The Onion, The New Yorker (Borowitz Report) and legitimate news sources such as Wall Street Journal and CNN Breaking News. We explore the syntactic tree of the sentence and extract inconsistencies between attributes and the head noun in noun phrases. We also detect the existence of named entities and relations between named entities and noun phrases as well as contradictions between the main clause and corresponding prepositional phrase. For satirical news, such inconsistencies often exist since satirical news usually combines irrelevant components so as to attain surprise and humor. The discrepancies are measured by cosine similarity between word components where words are represented by Glove BIBREF7. Sentence structures are derived by Flair, a state-of-the-art NLP framework, which better captures part-of-speech and named entity structures BIBREF8.
Due to the obscurity of the satire genre and the lack of information in tweet-form satirical news, there exists ambiguity in satirical news, which makes a traditional binary decision difficult. That is, it is difficult to classify one news item as satirical or legitimate with the available information. Three-way decisions, proposed by YY Yao, added an option - a deferral decision - to the traditional yes-and-no binary decisions and can be used to classify satirical news BIBREF9, BIBREF10. That is, one news item may be classified as satirical, legitimate, or deferred. We apply the rough sets model, particularly the game-theoretic rough sets, to classify news into three groups, i.e., satirical, legitimate, and deferral. The game-theoretic rough set (GTRS) model, proposed by JT Yao and Herbert, is a recent promising model for decision making in the rough set context BIBREF11. GTRS determine three decision regions from a tradeoff perspective when multiple criteria are involved to evaluate the classification models BIBREF12. Games are formulated to obtain a tradeoff between involved criteria. The balanced thresholds of three decision regions can be induced from the game equilibria. GTRS have been applied in recommendation systems BIBREF13, medical decision making BIBREF14, uncertainty analysis BIBREF15, and spam filtering BIBREF16.
We apply the GTRS model to our preprocessed dataset and divide all news into satirical, legitimate, or deferral regions. The probabilistic thresholds that determine the three decision regions are obtained by formulating competitive games between accuracy and coverage and then finding the Nash equilibrium of the games. We perform extensive experiments on the collected dataset, fine-tuning the model with different discretization methods and variations of equivalence classes. The experimental result shows that the performance of the proposed model is superior compared with the Pawlak rough sets model and SVM.
Related Work
Satirical news detection is an important yet challenging NLP task. Many feature based models have been proposed. Burfoot et al. extracted features of headline, profanity, and slang using word relations given by statistical metrics and a lexical database BIBREF1. Rubin et al. proposed an SVM based model with five features (absurdity, humor, grammar, negative affect, and punctuation) for fake news document detection BIBREF2. Yang et al. presented linguistic features such as psycholinguistic features based on a dictionary and writing stylistic features from part-of-speech tags distribution frequency BIBREF17. Shu et al. gave a survey in which a set of feature extraction methods is introduced for fake news on social media BIBREF3. Conroy et al. also use social network behavior to detect fake news BIBREF18. For satirical sentence classification, Davidov et al. extract patterns using word frequency and punctuation features for tweet sentences and Amazon comments BIBREF19. The detection of a certain type of sarcasm which contrasts positive sentiment with a negative situation by analyzing the sentence pattern with a bootstrapped learning approach was also discussed BIBREF20. Although word level statistical features are widely used, with advanced word representations and state-of-the-art part-of-speech tagging and named entity recognition models, we observe that semantic features are more important than word level statistical features to model performance. Thus, we decompose the syntactic tree and use word vectors to more precisely capture the semantic inconsistencies in different structural parts of a satirical news tweet.
Recently, with the success of deep learning in NLP, many researchers have attempted to detect fake news with end-to-end neural nets based approaches. Ruchansky et al. proposed a hybrid deep neural model which processes both text and user information BIBREF5, while Wang et al. proposed a neural network model that takes both text and image data BIBREF6 for detection. Sarkar et al. presented a neural network with attention to capture both sentence level and document level satire BIBREF4. Some research analyzed sarcasm from non-news text. Ghosh and Veale BIBREF21 used both the linguistic context and the psychological context information with a bi-directional LSTM to detect sarcasm in users' tweets. They also published a feedback-based dataset by collecting the responses from the tweets' authors for future analysis. While all these works detect fake news given full text or image content, or target non-news tweets, we attempt to bridge the gap and detect satirical news by analyzing news tweets which concisely summarize the content of news.
Methodology
In this section, we will describe the composition and preprocessing of our dataset and introduce our model in detail. We create our dataset by collecting legitimate and satirical news tweets from different news source accounts. Our model aims to detect whether the content of a news tweet is satirical or legitimate. We first extract the semantic features based on inconsistencies in different structural parts of the tweet sentences, and then use these features to train game-theoretic rough set decision model.
Methodology ::: Dataset
We collected approximately 9,000 news tweets from satirical news sources such as The Onion and Borowitz Report and about 11,000 news tweets from legitimate news sources such as Wall Street Journal and CNN Breaking News over the past three years. Each tweet is a concise summary of a news article. Duplicated and extremely short tweets are removed. A news tweet is labeled as satirical if it is written by a satirical news source and legitimate if it is from a legitimate news source. Table TABREF2 gives an example of tweet instances that comprise our dataset.
Methodology ::: Semantic Feature Extraction
Satirical news is not based on or does not aim to state the fact. Rather, it uses parody or humor to make statement, criticisms, or just amusements. In order to achieve such effect, contradictions are greatly utilized. Therefore, inconsistencies significantly exist in different parts of a satirical news tweet. In addition, there is a lack of entity or inconsistency between entities in news satire. We extracted these features at semantic level from different sub-structures of the news tweet. Different structural parts of the sentence are derived by part-of-speech tagging and named entity recognition by Flair. The inconsistencies in different structures are measured by cosine similarity of word phrases where words are represented by Glove word vectors. We explored three different aspects of inconsistency and designed metrics for their measurements. A word level feature using tf-idf BIBREF22 is added for robustness.
Methodology ::: Semantic Feature Extraction ::: Inconsistency in Noun Phrase Structures
One way for a news satire to obtain surprise or humor effect is to combine irrelevant or less jointly used attributes and the head noun which they modified. For example, noun phrase such as “rampant accountability", “posthumous apology", “Vatican basement", “self-imposed mental construct" and other rare combinations are widely used in satirical news, while individual words themselves are common. To measure such inconsistency, we first select all leaf noun phrases (NP) extracted from the semantic trees to avoid repeated calculation. Then for each noun phrase, each adjacent word pair is selected and represented by 100-dim Glove word vector denoted as $(v_{t},w_{t})$. We define the averaged cosine similarity of noun phrase word pairs as:
where $T$ is the total number of word pairs. We use $S_{N\!P}$ as a feature to capture the overall inconsistency in noun phrase uses. $S_{N\!P}$ ranges from -1 to 1, where a smaller value indicates more significant inconsistency.
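A minimal sketch of this computation follows, assuming `glove` is a preloaded dictionary mapping words to 100-dimensional numpy vectors and each leaf noun phrase is given as a list of tokens; both are stand-ins for the actual Flair/GloVe pipeline.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def s_np(noun_phrases, glove):
    """Average cosine similarity over adjacent word pairs in leaf noun phrases."""
    similarities = []
    for phrase in noun_phrases:                     # phrase: list of tokens
        vectors = [glove[w] for w in phrase if w in glove]
        similarities += [cosine(u, v) for u, v in zip(vectors, vectors[1:])]
    # Fallback value for tweets with no in-vocabulary pairs is an assumption.
    return sum(similarities) / len(similarities) if similarities else 0.0
```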
Methodology ::: Semantic Feature Extraction ::: Inconsistency Between Clauses
Another commonly used rhetorical approach for news satire is to create a contradiction between the main clause and its prepositional phrase or relative clause. For instance, in the tweet “Trump boys counter Chinese currency manipulation $by$ adding extra zeros To $20 Bills.", contradiction or surprise is gained by contrasting irrelevant statements provided by different parts of the sentence. Let $q$ and $p$ denote two clauses separated by a main/relative relation or preposition, and $(w_{1},w_{2},\ldots ,w_{q})$ and $(v_{1},v_{2},\ldots ,v_{p})$ be the vectorized words in $q$ and $p$. Then we define the inconsistency between $q$ and $p$ as:
Similarly, the feature $S_{Q\!P}$ is measured by cosine similarity of linear summations of word vectors, where smaller value indicates more significant inconsistency.
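Reusing the `cosine` helper and `glove` lookup from the previous sketch, the clause-level feature could be computed as follows; the clause splitting itself is assumed to come from the parse.

```python
def s_qp(clause_q, clause_p, glove):
    """Cosine similarity between the summed word vectors of two clauses."""
    q_vec = np.sum([glove[w] for w in clause_q if w in glove], axis=0)
    p_vec = np.sum([glove[w] for w in clause_p if w in glove], axis=0)
    return cosine(q_vec, p_vec)
```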
Methodology ::: Semantic Feature Extraction ::: Inconsistency Between Named Entities and Noun Phrases
Even though many satirical news tweets are made based on real persons or events, most of them lack specific entities. Rather, because the news is fabricated, news writers use words such as “man",“woman",“local man", “area woman",“local family" as the subject. However, the inconsistency between named entities and noun phrases often exists in news satire if a named entity is included. For example, the named entity “Andrew Yang" and the noun phrase “time vortex" show greater inconsistency than “President Trump", "Senate Republicans", and “White House" do in the legitimate news “President Trump invites Senate Republicans to the White House to talk about the funding bill." We define such inconsistency as a categorical feature such that:
$S_{N\! E\! R\! N}$ is the cosine similarity of named entities and noun phrases of a certain sentence and $\bar{S}_{N\! E\! R\! N}$ is the mean value of $S_{N\! E\! R\! N}$ in corpus.
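The equation itself is not reproduced in this excerpt, so the category coding below (no entity / below the corpus mean / at or above it) is an assumed reading of the description; the similarity reuses the helpers from the earlier sketches.

```python
def s_nern(entity_tokens, noun_phrase_tokens, glove):
    """Cosine similarity between summed vectors of named entities and noun phrases."""
    e_vec = np.sum([glove[w] for w in entity_tokens if w in glove], axis=0)
    n_vec = np.sum([glove[w] for w in noun_phrase_tokens if w in glove], axis=0)
    return cosine(e_vec, n_vec)

def nern_category(similarity, corpus_mean, has_entity):
    """0: no named entity, 1: below the corpus mean, 2: at or above it (assumed coding)."""
    if not has_entity:
        return 0
    return 1 if similarity < corpus_mean else 2
```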
Methodology ::: Semantic Feature Extraction ::: Word Level Feature Using TF-IDF
We calculated the difference of tf-idf scores between legitimate news corpus and satirical news corpus for each single word. Then, the set $S_{voc}$ that includes most representative legitimate news words is created by selecting top 100 words given the tf-idf difference. For a news tweet and any word $w$ in the tweet, we define the binary feature $B_{voc}$ as:
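One possible implementation of this vocabulary feature is sketched below; the use of scikit-learn's TfidfVectorizer and of mean tf-idf per corpus is an assumption, as the text only specifies the tf-idf difference and the top-100 cutoff.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def top_legitimate_words(legit_docs, satire_docs, k=100):
    """Top-k words whose mean tf-idf is higher in legitimate news than in satire."""
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(legit_docs + satire_docs)
    n = len(legit_docs)
    diff = tfidf[:n].mean(axis=0).A1 - tfidf[n:].mean(axis=0).A1
    vocab = vectorizer.get_feature_names_out()
    return {vocab[i] for i in diff.argsort()[::-1][:k]}

def b_voc(tweet_tokens, s_voc):
    """1 if any word of the tweet appears in the representative legitimate set."""
    return int(any(tok.lower() in s_voc for tok in tweet_tokens))
```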
Methodology ::: GTRS Decision Model
We construct a Game-theoretic Rough Sets model for classification given the extracted features. Suppose $E\subseteq U \times U$ is an equivalence relation on a finite nonempty universe of objects $U$, where $E$ is reflexive, symmetric, and transitive. The equivalence class containing an object $x$ is given by $[x]=\lbrace y\in U|xEy\rbrace $. The objects in one equivalence class all have the same attribute values. In the satirical news context, given an undefined concept $satire$, probabilistic rough sets divide all news into three pairwise disjoint groups i.e., the satirical group $POS(satire)$, legitimate group $NEG(satire)$, and deferral group $BND(satire)$, by using the conditional probability $Pr(satire|[x]) = \frac{|satire\cap [x]|}{|[x]|}$ as the evaluation function, and $(\alpha ,\beta )$ as the acceptance and rejection thresholds BIBREF23, BIBREF9, BIBREF10, that is,
Given an equivalence class $[x]$, if the conditional probability $Pr(satire|[x])$ is greater than or equal to the specified acceptance threshold $\alpha $, i.e., $Pr(satire|[x])\ge \alpha $, we accept the news in $[x]$ as $satirical$. If $Pr(satire|[x])$ is less than or equal to the specified rejection threshold $\beta $, i.e., $Pr(satire|[x])\le \beta $ we reject the news in $[x]$ as $satirical$, or we accept the news in $[x]$ as $legitimate$. If $Pr(satire|[x])$ is between $\alpha $ and $\beta $, i.e., $\beta <Pr(satire|[x])<\alpha $, we defer to make decisions on the news in $[x]$. Pawlak rough sets can be viewed as a special case of probabilistic rough sets with $(\alpha ,\beta )=(1,0)$.
Given a pair of probabilistic thresholds $(\alpha , \beta )$, we can obtain a news classifier according to Equation (DISPLAY_FORM13). The three regions are a partition of the universe $U$,
Then, the accuracy and coverage rate to evaluate the performance of the derived classifier are defined as follows BIBREF12,
The criterion coverage indicates the proportions of news that can be confidently classified. Next, we will obtain $(\alpha , \beta )$ by game formulation and repetition learning.
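Since the accuracy and coverage equations are referenced but not reproduced in this excerpt, the sketch below follows the standard GTRS definitions: accuracy is measured over the news classified into the positive and negative regions, and coverage is the fraction of news so classified. Each equivalence class is assumed to be summarized by its size and its conditional probability $Pr(satire|X)$.

```python
def three_way_regions(classes, alpha, beta):
    """Split equivalence classes into positive, boundary, and negative regions.

    classes: list of (size, pr_satire) pairs, one per equivalence class.
    """
    pos = [c for c in classes if c[1] >= alpha]
    neg = [c for c in classes if c[1] <= beta]
    bnd = [c for c in classes if beta < c[1] < alpha]
    return pos, bnd, neg

def accuracy_and_coverage(classes, alpha, beta):
    pos, _, neg = three_way_regions(classes, alpha, beta)
    correct = sum(n * p for n, p in pos) + sum(n * (1 - p) for n, p in neg)
    covered = sum(n for n, _ in pos + neg)
    total = sum(n for n, _ in classes)
    return (correct / covered if covered else 0.0), covered / total
```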
Methodology ::: GTRS Decision Model ::: Game Formulation
We construct a game $G=\lbrace O,S,u\rbrace $ given the set of game players $O$, the set of strategy profile $S$, and the payoff functions $u$, where the accuracy and coverage are two players, respectively, i.e., $O=\lbrace acc, cov\rbrace $.
The set of strategy profiles $S=S_{acc}\times S_{cov}$, where $S_{acc}$ and $S_{cov} $ are sets of possible strategies or actions performed by players $acc$ and $cov$. The initial thresholds are set as $(1,0)$. All these strategies are the changes made on the initial thresholds,
$c_{acc}$ and $c_{cov}$ denote the change steps used by the two players, and their values are determined by the concrete experimental data set.
Payoff functions. The payoffs of players are $u=(u_{acc},u_{cov})$, and $u_{acc}$ and $u_{cov}$ denote the payoff functions of players $acc$ and $cov$, respectively. Given a strategy profile $p=(s, t)$ with player $acc$ performing $s$ and player $cov$ performing $t$, the payoffs of $acc$ and $cov$ are $u_{acc}(s, t)$ and $u_{cov}(s, t)$. We use $u_{acc}(\alpha ,\beta )$ and $u_{cov}(\alpha ,\beta )$ to show this relationship. The payoff functions $u_{acc}(\alpha ,\beta )$ and $u_{cov}(\alpha ,\beta )$ are defined as,
where $Acc_{(\alpha , \beta )}(Satire)$ and $Cov_{(\alpha , \beta )}(Satire)$ are the accuracy and coverage defined in Equations (DISPLAY_FORM15) and (DISPLAY_FORM16).
Payoff table. We use payoff tables to represent the formulated game. Table TABREF20 shows a payoff table example in which both players have 3 strategies as defined in the strategy sets above.
The arrow $\downarrow $ denotes decreasing a value and $\uparrow $ denotes increasing a value. On each cell, the threshold values are determined by two players.
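A sketch of building such a payoff table for one round of the game is given below, reusing the accuracy/coverage helper from the earlier sketch; the assignment of players to thresholds (accuracy lowers $\alpha$, coverage raises $\beta$) and the three-step strategy sets are assumptions consistent with the example reported later in the text.

```python
def payoff_table(classes, alpha, beta, c_acc, c_cov, n_strategies=3):
    """(accuracy, coverage) payoffs for every joint strategy in one game round."""
    table = {}
    for i in range(n_strategies):        # player acc: lower alpha by i steps
        for j in range(n_strategies):    # player cov: raise beta by j steps
            a, b = alpha - i * c_acc, beta + j * c_cov
            table[(i, j)] = accuracy_and_coverage(classes, a, b)
    return table
```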
Methodology ::: GTRS Decision Model ::: Repetition Learning Mechanism
We repeat the game with the new thresholds until a balanced solution is reached. We first analyze the pure strategy equilibrium of the game and then check if the stopping criteria are satisfied.
Game equilibrium. The game solution of pure strategy Nash equilibrium is used to determine possible game outcomes in GTRS. The strategy profile $(s_{i},t_{j})$ is a pure strategy Nash equilibrium, if
This means that none of the players would like to change their strategy, as they would lose benefit if deviating from this strategy profile, provided the player has knowledge of the other player's strategy.
Repetition of games. Assuming that we formulate a game, in which the initial thresholds are $(\alpha , \beta )$, and the equilibrium analysis shows that the thresholds corresponding to the equilibrium are $(\alpha ^{*}, \beta ^{*})$. If the thresholds $(\alpha ^{*}, \beta ^{*})$ do not satisfy the stopping criterion, we will update the initial thresholds in the subsequent games. The initial thresholds of the new game will be set as $(\alpha ^{*}, \beta ^{*})$. If the thresholds $(\alpha ^{*}, \beta ^{*})$ satisfy the stopping criterion, we may stop the repetition of games.
Stopping criterion. We define the stopping criteria so that the iterations of games can stop at a proper time. In this research, we set the stopping criterion as within the range of thresholds, the increase of one player's payoff is less than the decrease of the other player's payoff.
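The sketch below finds pure-strategy Nash equilibria in such a payoff table and repeats the game until the stopping criterion fires or the thresholds would leave the valid range; taking the first equilibrium when several exist is an assumption, and the payoff-table helper from the previous sketch is reused.

```python
def pure_nash_equilibria(table, n_strategies=3):
    """Cells where neither player can improve its own payoff unilaterally."""
    equilibria = []
    for (i, j), (acc, cov) in table.items():
        best_for_acc = all(table[(k, j)][0] <= acc for k in range(n_strategies))
        best_for_cov = all(table[(i, k)][1] <= cov for k in range(n_strategies))
        if best_for_acc and best_for_cov:
            equilibria.append((i, j))
    return equilibria

def repeat_games(classes, c_acc=0.03, c_cov=0.03, alpha=1.0, beta=0.0):
    """Repeat the game from (1, 0), updating thresholds with each equilibrium."""
    prev_acc, prev_cov = accuracy_and_coverage(classes, alpha, beta)
    while True:
        table = payoff_table(classes, alpha, beta, c_acc, c_cov)
        equilibria = pure_nash_equilibria(table)
        if not equilibria or equilibria[0] == (0, 0):
            break                                       # no useful move left
        i, j = equilibria[0]
        new_alpha, new_beta = alpha - i * c_acc, beta + j * c_cov
        if not (0 < new_beta < new_alpha < 1):
            break                                       # thresholds out of range
        acc, cov = table[(i, j)]
        if (acc - prev_acc) + (cov - prev_cov) < 0:     # gain smaller than loss
            break
        alpha, beta, prev_acc, prev_cov = new_alpha, new_beta, acc, cov
    return alpha, beta
```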
Experiments
There are 8757 news records in our preprocessed data set. We use Jenks natural breaks BIBREF24 to discretize continuous variables $S_{N\!P}$ and $S_{Q\!P}$ both into five categories denoted by nominal values from 0 to 4, where larger values still fall into bins with larger nominal value. Let $D_{N\!P}$ and $D_{Q\!P}$ denote the discretized variables $S_{N\!P}$ and $S_{Q\!P}$, respectively. We derived the information table that only contains discrete features from our original dataset. A fraction of the information table is shown in Table TABREF23.
The news whose condition attributes have the same values are classified in an equivalence class $X_i$. We derived 149 equivalence classes and calculated the corresponding probability $Pr(X_i)$ and condition probability $Pr(Satire|X_i)$ for each $X_i$. The probability $Pr(X_{i})$ denotes the ratio of the number of news contained in the equivalence class $X_i$ to the total number of news in the dataset, while the conditional probability $Pr(Satire|X_{i})$ is the proportion of news in $X_i$ that are satirical. We combine the equivalence classes with the same conditional probability and reduce the number of equivalence classes to 108. Table TABREF24 shows a part of the probabilistic data information about the concept satire.
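A sketch of this preprocessing step follows, using the jenkspy package for Jenks natural breaks (the choice of library is an assumption) and simple counting for the equivalence classes and their conditional probabilities.

```python
import jenkspy
from collections import Counter, defaultdict

def discretize(values, n_classes=5):
    """Map continuous feature values to nominal bins 0..n_classes-1."""
    breaks = jenkspy.jenks_breaks(list(values), n_classes)
    return [sum(v > b for b in breaks[1:-1]) for v in values]

def equivalence_classes(feature_rows, labels):
    """feature_rows: discrete feature tuples; labels: 1 = satirical, 0 = legitimate.

    Returns a list of (Pr(X_i), Pr(satire|X_i)) pairs, one per equivalence class.
    """
    counts, satirical = Counter(feature_rows), defaultdict(int)
    for row, y in zip(feature_rows, labels):
        satirical[row] += y
    total = len(feature_rows)
    return [(counts[r] / total, satirical[r] / counts[r]) for r in counts]
```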
Experiments ::: Finding Thresholds with GTRS
We formulated a competitive game between the criteria accuracy and coverage to obtain the balanced probabilistic thresholds with the initial thresholds $(\alpha , \beta )=(1,0)$ and learning rate 0.03. As shown in the payoff table Table TABREF26,
the cell at the right bottom corner is the game equilibrium whose strategy profile is ($\beta $ increases 0.06, $\alpha $ decreases 0.06). The payoffs of the players are (0.9784,0.3343). We set the stopping criterion as the increase of one player's payoff is less than the decrease of the other player's payoff when the thresholds are within the range. When the thresholds change from (1,0) to (0.94, 0.06), the accuracy is decreased from 1 to 0.9784 but the coverage is increased from 0.0795 to 0.3343. We repeat the game by setting $(0.94, 0.06)$ as the next initial thresholds.
The competitive games are repeated seven times. The result is shown in Table TABREF27.
After the eighth iteration, the repetition of the game is stopped because further changes to the thresholds may cause them to lie outside the range $0 < \beta < \alpha <1$, and the final result is the equilibrium of the seventh game $(\alpha , \beta )=(0.52, 0.48)$.
Experiments ::: Results
We compare Pawlak rough sets, SVM, and our GTRS approach on the proposed dataset. Table TABREF29 shows the results on the experimental data.
The SVM classifier achieved an accuracy of $78\%$ with a $100\%$ coverage. The Pawlak rough set model using $(\alpha , \beta )=(1,0)$ achieves a $100\%$ accuracy and a coverage ratio of $7.95\%$, which means it can only classify $7.95\%$ of the data. The classifier constructed by GTRS with $(\alpha , \beta )=(0.52, 0.48)$ reached an accuracy of $82.71\%$ and a coverage of $97.49\%$, which indicates that $97.49\%$ of the data can be classified with an accuracy of $82.71\%$. The remaining $2.51\%$ of the data cannot be classified without providing more information. To make our method comparable to other baselines such as SVM, we assume random guessing is made on the deferral region and present the modified accuracy. The modified accuracy for our approach is then $0.8271\times 0.9749 + 0.5 \times 0.0251 =81.89\%$. Our method shows significant improvement compared to the Pawlak model and SVM.
Conclusion
In this paper, we propose a satirical news detection approach based on extracted semantic features and game-theoretic rough sets. In our model, the semantic feature extraction captures the inconsistency in the different structural parts of the sentences, and the GTRS classifier can process the incomplete information based on repetitive learning and the acceptance and rejection thresholds. The experimental results on our created satirical and legitimate news tweets dataset show that our model significantly outperforms the Pawlak rough set model and SVM. In particular, we demonstrate our model's ability to interpret satirical news detection from a semantic and information trade-off perspective. Other interesting extensions of our paper may be to use rough set models to extract linguistic features at the document level. | Their GTRS approach got an improvement of 3.89% compared to SVM and 27.91% compared to Pawlak.
018ef092ffc356a2c0e970ae64ad3c2cf8443288 | 018ef092ffc356a2c0e970ae64ad3c2cf8443288_0 | Q: How large is the dataset?
Text: Introduction
Satirical news, which uses parody characterized in a conventional news style, has now become a form of entertainment on social media. While news satire is claimed to be purely comedic and for amusement, it makes statements on real events, often with the aim of attaining social criticism and influencing change BIBREF0. Satirical news can also be misleading to readers, even though it is not designed for falsifications. Given such sophistication, satirical news detection is a necessary yet challenging natural language processing (NLP) task. Many feature based fake or satirical news detection systems BIBREF1, BIBREF2, BIBREF3 extract features from word relations given by statistics or a lexical database, as well as other linguistic features. In addition, with the great success of deep learning in NLP in recent years, many end-to-end neural nets based detection systems BIBREF4, BIBREF5, BIBREF6 have been proposed and delivered promising results on satirical news article detection.
However, with the evolution of fast-paced social media, satirical news has been condensed into a satirical-news-in-one-sentence form. For example, one single tweet of “If earth continues to warm at current rate moon will be mostly underwater by 2400" by The Onion is more widely consumed and spread by social media users than the corresponding full article posted on The Onion website. Existing detection systems trained on full document data might not be applicable to this form of satirical news. Therefore, we collect news tweets from satirical news sources such as The Onion, The New Yorker (Borowitz Report) and legitimate news sources such as Wall Street Journal and CNN Breaking News. We explore the syntactic tree of the sentence and extract inconsistencies between attributes and the head noun in noun phrases. We also detect the existence of named entities and relations between named entities and noun phrases as well as contradictions between the main clause and corresponding prepositional phrase. For satirical news, such inconsistencies often exist since satirical news usually combines irrelevant components so as to attain surprise and humor. The discrepancies are measured by cosine similarity between word components where words are represented by Glove BIBREF7. Sentence structures are derived by Flair, a state-of-the-art NLP framework, which better captures part-of-speech and named entity structures BIBREF8.
Due to the obscurity of the satire genre and the lack of information in tweet-form satirical news, there exists ambiguity in satirical news, which makes a traditional binary decision difficult. That is, it is difficult to classify one news item as satirical or legitimate with the available information. Three-way decisions, proposed by YY Yao, added an option - a deferral decision - to the traditional yes-and-no binary decisions and can be used to classify satirical news BIBREF9, BIBREF10. That is, one news item may be classified as satirical, legitimate, or deferred. We apply the rough sets model, particularly the game-theoretic rough sets, to classify news into three groups, i.e., satirical, legitimate, and deferral. The game-theoretic rough set (GTRS) model, proposed by JT Yao and Herbert, is a recent promising model for decision making in the rough set context BIBREF11. GTRS determine three decision regions from a tradeoff perspective when multiple criteria are involved to evaluate the classification models BIBREF12. Games are formulated to obtain a tradeoff between involved criteria. The balanced thresholds of three decision regions can be induced from the game equilibria. GTRS have been applied in recommendation systems BIBREF13, medical decision making BIBREF14, uncertainty analysis BIBREF15, and spam filtering BIBREF16.
We apply the GTRS model to our preprocessed dataset and divide all news into satirical, legitimate, or deferral regions. The probabilistic thresholds that determine the three decision regions are obtained by formulating competitive games between accuracy and coverage and then finding the Nash equilibrium of the games. We perform extensive experiments on the collected dataset, fine-tuning the model with different discretization methods and variations of equivalence classes. The experimental result shows that the performance of the proposed model is superior compared with the Pawlak rough sets model and SVM.
Related Work
Satirical news detection is an important yet challenging NLP task. Many feature based models have been proposed. Burfoot et al. extracted features of headline, profanity, and slang using word relations given by statistical metrics and a lexical database BIBREF1. Rubin et al. proposed an SVM based model with five features (absurdity, humor, grammar, negative affect, and punctuation) for fake news document detection BIBREF2. Yang et al. presented linguistic features such as psycholinguistic features based on a dictionary and writing stylistic features from part-of-speech tags distribution frequency BIBREF17. Shu et al. gave a survey in which a set of feature extraction methods is introduced for fake news on social media BIBREF3. Conroy et al. also use social network behavior to detect fake news BIBREF18. For satirical sentence classification, Davidov et al. extract patterns using word frequency and punctuation features for tweet sentences and Amazon comments BIBREF19. The detection of a certain type of sarcasm which contrasts positive sentiment with a negative situation by analyzing the sentence pattern with a bootstrapped learning approach was also discussed BIBREF20. Although word level statistical features are widely used, with advanced word representations and state-of-the-art part-of-speech tagging and named entity recognition models, we observe that semantic features are more important than word level statistical features to model performance. Thus, we decompose the syntactic tree and use word vectors to more precisely capture the semantic inconsistencies in different structural parts of a satirical news tweet.
Recently, with the success of deep learning in NLP, many researchers have attempted to detect fake news with end-to-end neural nets based approaches. Ruchansky et al. proposed a hybrid deep neural model which processes both text and user information BIBREF5, while Wang et al. proposed a neural network model that takes both text and image data BIBREF6 for detection. Sarkar et al. presented a neural network with attention to capture both sentence level and document level satire BIBREF4. Some research analyzed sarcasm from non-news text. Ghosh and Veale BIBREF21 used both the linguistic context and the psychological context information with a bi-directional LSTM to detect sarcasm in users' tweets. They also published a feedback-based dataset by collecting the responses from the tweets' authors for future analysis. While all these works detect fake news given full text or image content, or target non-news tweets, we attempt to bridge the gap and detect satirical news by analyzing news tweets which concisely summarize the content of news.
Methodology
In this section, we will describe the composition and preprocessing of our dataset and introduce our model in detail. We create our dataset by collecting legitimate and satirical news tweets from different news source accounts. Our model aims to detect whether the content of a news tweet is satirical or legitimate. We first extract the semantic features based on inconsistencies in different structural parts of the tweet sentences, and then use these features to train game-theoretic rough set decision model.
Methodology ::: Dataset
We collected approximately 9,000 news tweets from satirical news sources such as The Onion and Borowitz Report and about 11,000 news tweets from legitimate news sources such as Wall Street Journal and CNN Breaking News over the past three years. Each tweet is a concise summary of a news article. Duplicated and extremely short tweets are removed. A news tweet is labeled as satirical if it is written by a satirical news source and legitimate if it is from a legitimate news source. Table TABREF2 gives an example of tweet instances that comprise our dataset.
Methodology ::: Semantic Feature Extraction
Satirical news is not based on or does not aim to state the fact. Rather, it uses parody or humor to make statement, criticisms, or just amusements. In order to achieve such effect, contradictions are greatly utilized. Therefore, inconsistencies significantly exist in different parts of a satirical news tweet. In addition, there is a lack of entity or inconsistency between entities in news satire. We extracted these features at semantic level from different sub-structures of the news tweet. Different structural parts of the sentence are derived by part-of-speech tagging and named entity recognition by Flair. The inconsistencies in different structures are measured by cosine similarity of word phrases where words are represented by Glove word vectors. We explored three different aspects of inconsistency and designed metrics for their measurements. A word level feature using tf-idf BIBREF22 is added for robustness.
Methodology ::: Semantic Feature Extraction ::: Inconsistency in Noun Phrase Structures
One way for a news satire to obtain surprise or humor effect is to combine irrelevant or less jointly used attributes and the head noun which they modified. For example, noun phrase such as “rampant accountability", “posthumous apology", “Vatican basement", “self-imposed mental construct" and other rare combinations are widely used in satirical news, while individual words themselves are common. To measure such inconsistency, we first select all leaf noun phrases (NP) extracted from the semantic trees to avoid repeated calculation. Then for each noun phrase, each adjacent word pair is selected and represented by 100-dim Glove word vector denoted as $(v_{t},w_{t})$. We define the averaged cosine similarity of noun phrase word pairs as:
where $T$ is the total number of word pairs. We use $S_{N\!P}$ as a feature to capture the overall inconsistency in noun phrase uses. $S_{N\!P}$ ranges from -1 to 1, where a smaller value indicates more significant inconsistency.
Methodology ::: Semantic Feature Extraction ::: Inconsistency Between Clauses
Another commonly used rhetorical approach for news satire is to create a contradiction between the main clause and its prepositional phrase or relative clause. For instance, in the tweet “Trump boys counter Chinese currency manipulation $by$ adding extra zeros To $20 Bills.", contradiction or surprise is gained by contrasting irrelevant statements provided by different parts of the sentence. Let $q$ and $p$ denote two clauses separated by a main/relative relation or preposition, and $(w_{1},w_{2},\ldots ,w_{q})$ and $(v_{1},v_{2},\ldots ,v_{p})$ be the vectorized words in $q$ and $p$. Then we define the inconsistency between $q$ and $p$ as:
Similarly, the feature $S_{Q\!P}$ is measured by cosine similarity of linear summations of word vectors, where smaller value indicates more significant inconsistency.
Methodology ::: Semantic Feature Extraction ::: Inconsistency Between Named Entities and Noun Phrases
Even though many satirical news tweets are made based on real persons or events, most of them lack specific entities. Rather, because the news is fabricated, news writers use words such as “man",“woman",“local man", “area woman",“local family" as the subject. However, the inconsistency between named entities and noun phrases often exists in news satire if a named entity is included. For example, the named entity “Andrew Yang" and the noun phrase “time vortex" show greater inconsistency than “President Trump", "Senate Republicans", and “White House" do in the legitimate news “President Trump invites Senate Republicans to the White House to talk about the funding bill." We define such inconsistency as a categorical feature such that:
$S_{N\! E\! R\! N}$ is the cosine similarity of named entities and noun phrases of a certain sentence and $\bar{S}_{N\! E\! R\! N}$ is the mean value of $S_{N\! E\! R\! N}$ in corpus.
Methodology ::: Semantic Feature Extraction ::: Word Level Feature Using TF-IDF
We calculated the difference of tf-idf scores between legitimate news corpus and satirical news corpus for each single word. Then, the set $S_{voc}$ that includes most representative legitimate news words is created by selecting top 100 words given the tf-idf difference. For a news tweet and any word $w$ in the tweet, we define the binary feature $B_{voc}$ as:
Methodology ::: GTRS Decision Model
We construct a Game-theoretic Rough Sets model for classification given the extracted features. Suppose $E\subseteq U \times U$ is an equivalence relation on a finite nonempty universe of objects $U$, where $E$ is reflexive, symmetric, and transitive. The equivalence class containing an object $x$ is given by $[x]=\lbrace y\in U|xEy\rbrace $. The objects in one equivalence class all have the same attribute values. In the satirical news context, given an undefined concept $satire$, probabilistic rough sets divide all news into three pairwise disjoint groups i.e., the satirical group $POS(satire)$, legitimate group $NEG(satire)$, and deferral group $BND(satire)$, by using the conditional probability $Pr(satire|[x]) = \frac{|satire\cap [x]|}{|[x]|}$ as the evaluation function, and $(\alpha ,\beta )$ as the acceptance and rejection thresholds BIBREF23, BIBREF9, BIBREF10, that is,
Given an equivalence class $[x]$, if the conditional probability $Pr(satire|[x])$ is greater than or equal to the specified acceptance threshold $\alpha $, i.e., $Pr(satire|[x])\ge \alpha $, we accept the news in $[x]$ as $satirical$. If $Pr(satire|[x])$ is less than or equal to the specified rejection threshold $\beta $, i.e., $Pr(satire|[x])\le \beta $ we reject the news in $[x]$ as $satirical$, or we accept the news in $[x]$ as $legitimate$. If $Pr(satire|[x])$ is between $\alpha $ and $\beta $, i.e., $\beta <Pr(satire|[x])<\alpha $, we defer to make decisions on the news in $[x]$. Pawlak rough sets can be viewed as a special case of probabilistic rough sets with $(\alpha ,\beta )=(1,0)$.
Given a pair of probabilistic thresholds $(\alpha , \beta )$, we can obtain a news classifier according to Equation (DISPLAY_FORM13). The three regions are a partition of the universe $U$,
Then, the accuracy and coverage rate to evaluate the performance of the derived classifier are defined as follows BIBREF12,
The criterion coverage indicates the proportions of news that can be confidently classified. Next, we will obtain $(\alpha , \beta )$ by game formulation and repetition learning.
Methodology ::: GTRS Decision Model ::: Game Formulation
We construct a game $G=\lbrace O,S,u\rbrace $ given the set of game players $O$, the set of strategy profile $S$, and the payoff functions $u$, where the accuracy and coverage are two players, respectively, i.e., $O=\lbrace acc, cov\rbrace $.
The set of strategy profiles $S=S_{acc}\times S_{cov}$, where $S_{acc}$ and $S_{cov} $ are sets of possible strategies or actions performed by players $acc$ and $cov$. The initial thresholds are set as $(1,0)$. All these strategies are the changes made on the initial thresholds,
$c_{acc}$ and $c_{cov}$ denote the change steps used by the two players, and their values are determined by the concrete experimental data set.
Payoff functions. The payoffs of players are $u=(u_{acc},u_{cov})$, and $u_{acc}$ and $u_{cov}$ denote the payoff functions of players $acc$ and $cov$, respectively. Given a strategy profile $p=(s, t)$ with player $acc$ performing $s$ and player $cov$ performing $t$, the payoffs of $acc$ and $cov$ are $u_{acc}(s, t)$ and $u_{cov}(s, t)$. We use $u_{acc}(\alpha ,\beta )$ and $u_{cov}(\alpha ,\beta )$ to show this relationship. The payoff functions $u_{acc}(\alpha ,\beta )$ and $u_{cov}(\alpha ,\beta )$ are defined as,
where $Acc_{(\alpha , \beta )}(Satire)$ and $Cov_{(\alpha , \beta )}(Satire)$ are the accuracy and coverage defined in Equations (DISPLAY_FORM15) and (DISPLAY_FORM16).
Payoff table. We use payoff tables to represent the formulated game. Table TABREF20 shows a payoff table example in which both players have 3 strategies as defined in the strategy sets above.
The arrow $\downarrow $ denotes decreasing a value and $\uparrow $ denotes increasing a value. On each cell, the threshold values are determined by two players.
Methodology ::: GTRS Decision Model ::: Repetition Learning Mechanism
We repeat the game with the new thresholds until a balanced solution is reached. We first analyze the pure strategy equilibrium of the game and then check if the stopping criteria are satisfied.
Game equilibrium. The game solution of pure strategy Nash equilibrium is used to determine possible game outcomes in GTRS. The strategy profile $(s_{i},t_{j})$ is a pure strategy Nash equilibrium, if
This means that none of the players would like to change their strategy, as they would lose benefit if deviating from this strategy profile, provided the player has knowledge of the other player's strategy.
Repetition of games. Assuming that we formulate a game, in which the initial thresholds are $(\alpha , \beta )$, and the equilibrium analysis shows that the thresholds corresponding to the equilibrium are $(\alpha ^{*}, \beta ^{*})$. If the thresholds $(\alpha ^{*}, \beta ^{*})$ do not satisfy the stopping criterion, we will update the initial thresholds in the subsequent games. The initial thresholds of the new game will be set as $(\alpha ^{*}, \beta ^{*})$. If the thresholds $(\alpha ^{*}, \beta ^{*})$ satisfy the stopping criterion, we may stop the repetition of games.
Stopping criterion. We define the stopping criteria so that the iterations of games can stop at a proper time. In this research, we set the stopping criterion as within the range of thresholds, the increase of one player's payoff is less than the decrease of the other player's payoff.
Experiments
There are 8757 news records in our preprocessed data set. We use Jenks natural breaks BIBREF24 to discretize continuous variables $S_{N\!P}$ and $S_{Q\!P}$ both into five categories denoted by nominal values from 0 to 4, where larger values still fall into bins with larger nominal value. Let $D_{N\!P}$ and $D_{Q\!P}$ denote the discretized variables $S_{N\!P}$ and $S_{Q\!P}$, respectively. We derived the information table that only contains discrete features from our original dataset. A fraction of the information table is shown in Table TABREF23.
The news whose condition attributes have the same values are classified in an equivalence class $X_i$. We derived 149 equivalence classes and calculated the corresponding probability $Pr(X_i)$ and condition probability $Pr(Satire|X_i)$ for each $X_i$. The probability $Pr(X_{i})$ denotes the ratio of the number of news contained in the equivalence class $X_i$ to the total number of news in the dataset, while the conditional probability $Pr(Satire|X_{i})$ is the proportion of news in $X_i$ that are satirical. We combine the equivalence classes with the same conditional probability and reduce the number of equivalence classes to 108. Table TABREF24 shows a part of the probabilistic data information about the concept satire.
Experiments ::: Finding Thresholds with GTRS
We formulated a competitive game between the criteria accuracy and coverage to obtain the balanced probabilistic thresholds with the initial thresholds $(\alpha , \beta )=(1,0)$ and learning rate 0.03. As shown in the payoff table Table TABREF26,
the cell at the right bottom corner is the game equilibrium whose strategy profile is ($\beta $ increases 0.06, $\alpha $ decreases 0.06). The payoffs of the players are (0.9784,0.3343). We set the stopping criterion as the increase of one player's payoff is less than the decrease of the other player's payoff when the thresholds are within the range. When the thresholds change from (1,0) to (0.94, 0.06), the accuracy is decreased from 1 to 0.9784 but the coverage is increased from 0.0795 to 0.3343. We repeat the game by setting $(0.94, 0.06)$ as the next initial thresholds.
The competitive games are repeated seven times. The result is shown in Table TABREF27.
After the eighth iteration, the repetition of the game is stopped because further changes to the thresholds may cause them to lie outside the range $0 < \beta < \alpha <1$, and the final result is the equilibrium of the seventh game $(\alpha , \beta )=(0.52, 0.48)$.
Experiments ::: Results
We compare Pawlak rough sets, SVM, and our GTRS approach on the proposed dataset. Table TABREF29 shows the results on the experimental data.
The SVM classifier achieved an accuracy of $78\%$ with a $100\%$ coverage. The Pawlak rough set model using $(\alpha , \beta )=(1,0)$ achieves a $100\%$ accuracy and a coverage ratio of $7.95\%$, which means it can only classify $7.95\%$ of the data. The classifier constructed by GTRS with $(\alpha , \beta )=(0.52, 0.48)$ reached an accuracy of $82.71\%$ and a coverage of $97.49\%$, which indicates that $97.49\%$ of the data can be classified with an accuracy of $82.71\%$. The remaining $2.51\%$ of the data cannot be classified without providing more information. To make our method comparable to other baselines such as SVM, we assume random guessing is made on the deferral region and present the modified accuracy. The modified accuracy for our approach is then $0.8271\times 0.9749 + 0.5 \times 0.0251 =81.89\%$. Our method shows significant improvement compared to the Pawlak model and SVM.
Conclusion
In this paper, we propose a satirical news detection approach based on extracted semantic features and game-theoretic rough sets. In our model, the semantic feature extraction captures the inconsistency in the different structural parts of the sentences, and the GTRS classifier can process the incomplete information based on repetitive learning and the acceptance and rejection thresholds. The experimental results on our created satirical and legitimate news tweets dataset show that our model significantly outperforms the Pawlak rough set model and SVM. In particular, we demonstrate our model's ability to interpret satirical news detection from a semantic and information trade-off perspective. Other interesting extensions of our paper may be to use rough set models to extract linguistic features at the document level. | 8757 news records
de4e180f49ff187abc519d01eff14ebcd8149cad | de4e180f49ff187abc519d01eff14ebcd8149cad_0 | Q: What features do they extract?
Text: Introduction
Satirical news, which uses parody characterized in a conventional news style, has now become a form of entertainment on social media. While news satire is claimed to be purely comedic and for amusement, it makes statements on real events, often with the aim of attaining social criticism and influencing change BIBREF0. Satirical news can also be misleading to readers, even though it is not designed for falsifications. Given such sophistication, satirical news detection is a necessary yet challenging natural language processing (NLP) task. Many feature based fake or satirical news detection systems BIBREF1, BIBREF2, BIBREF3 extract features from word relations given by statistics or a lexical database, as well as other linguistic features. In addition, with the great success of deep learning in NLP in recent years, many end-to-end neural nets based detection systems BIBREF4, BIBREF5, BIBREF6 have been proposed and delivered promising results on satirical news article detection.
However, with the evolution of fast-paced social media, satirical news has been condensed into a satirical-news-in-one-sentence form. For example, one single tweet of “If earth continues to warm at current rate moon will be mostly underwater by 2400" by The Onion is more widely consumed and spread by social media users than the corresponding full article posted on The Onion website. Existing detection systems trained on full document data might not be applicable to this form of satirical news. Therefore, we collect news tweets from satirical news sources such as The Onion, The New Yorker (Borowitz Report) and legitimate news sources such as Wall Street Journal and CNN Breaking News. We explore the syntactic tree of the sentence and extract inconsistencies between attributes and the head noun in noun phrases. We also detect the existence of named entities and relations between named entities and noun phrases as well as contradictions between the main clause and corresponding prepositional phrase. For satirical news, such inconsistencies often exist since satirical news usually combines irrelevant components so as to attain surprise and humor. The discrepancies are measured by cosine similarity between word components where words are represented by Glove BIBREF7. Sentence structures are derived by Flair, a state-of-the-art NLP framework, which better captures part-of-speech and named entity structures BIBREF8.
Due to the obscurity of the satire genre and the lack of information in tweet-form satirical news, there exists ambiguity in satirical news, which makes a traditional binary decision difficult. That is, it is difficult to classify a news item as satirical or legitimate with the available information. Three-way decisions, proposed by YY Yao, added a deferral option to the traditional yes-and-no binary decisions and can be used to classify satirical news BIBREF9, BIBREF10. That is, a news item may be classified as satirical, legitimate, or deferred. We apply rough set models, particularly game-theoretic rough sets, to classify news into three groups, i.e., satirical, legitimate, and deferral. The game-theoretic rough set (GTRS) model, proposed by JT Yao and Herbert, is a recent and promising model for decision making in the rough set context BIBREF11. GTRS determines three decision regions from a tradeoff perspective when multiple criteria are involved to evaluate the classification models BIBREF12. Games are formulated to obtain a tradeoff between the involved criteria. The balanced thresholds of the three decision regions can be induced from the game equilibria. GTRS have been applied in recommendation systems BIBREF13, medical decision making BIBREF14, uncertainty analysis BIBREF15, and spam filtering BIBREF16.
We apply the GTRS model to our preprocessed dataset and divide all news into satirical, legitimate, or deferral regions. The probabilistic thresholds that determine the three decision regions are obtained by formulating competitive games between accuracy and coverage and then finding the Nash equilibria of the games. We perform extensive experiments on the collected dataset, fine-tuning the model with different discretization methods and variations of equivalence classes. The experimental results show that the proposed model outperforms the Pawlak rough set model and SVM.
Related Work
Satirical news detection is an important yet challenging NLP task. Many feature-based models have been proposed. Burfoot et al. extracted features of headline, profanity, and slang using word relations given by statistical metrics and a lexical database BIBREF1. Rubin et al. proposed an SVM-based model with five features (absurdity, humor, grammar, negative affect, and punctuation) for fake news document detection BIBREF2. Yang et al. presented linguistic features such as psycholinguistic features based on a dictionary and writing-style features based on part-of-speech tag distribution frequencies BIBREF17. Shu et al. gave a survey in which a set of feature extraction methods is introduced for fake news on social media BIBREF3. Conroy et al. also use social network behavior to detect fake news BIBREF18. For satirical sentence classification, Davidov et al. extract patterns using word frequency and punctuation features for tweet sentences and Amazon comments BIBREF19. The detection of a certain type of sarcasm, which contrasts positive sentiment with a negative situation, by analyzing the sentence pattern with bootstrapped learning was also discussed BIBREF20. Although word-level statistical features are widely used, with advanced word representations and state-of-the-art part-of-speech tagging and named entity recognition models, we observe that semantic features are more important than word-level statistical features to model performance. Thus, we decompose the syntactic tree and use word vectors to more precisely capture the semantic inconsistencies in different structural parts of a satirical news tweet.
Recently, with the success of deep learning in NLP, many researchers attempted to detect fake news with end-to-end neural-net-based approaches. Ruchansky et al. proposed a hybrid deep neural model which processes both text and user information BIBREF5, while Wang et al. proposed a neural network model that takes both text and image data BIBREF6 for detection. Sarkar et al. presented a neural network with attention to capture both sentence-level and document-level satire BIBREF4. Some research analyzed sarcasm in non-news text. Ghosh and Veale BIBREF21 used both the linguistic context and the psychological context information with a bi-directional LSTM to detect sarcasm in users' tweets. They also published a feedback-based dataset by collecting the responses from the tweets' authors for future analysis. While all these works detect fake news given full text or image content, or target non-news tweets, we attempt to bridge the gap and detect satirical news by analyzing news tweets, which concisely summarize the content of the news.
Methodology
In this section, we will describe the composition and preprocessing of our dataset and introduce our model in detail. We create our dataset by collecting legitimate and satirical news tweets from different news source accounts. Our model aims to detect whether the content of a news tweet is satirical or legitimate. We first extract the semantic features based on inconsistencies in different structural parts of the tweet sentences, and then use these features to train game-theoretic rough set decision model.
Methodology ::: Dataset
We collected approximately 9,000 news tweets from satirical news sources such as The Onion and the Borowitz Report and about 11,000 news tweets from legitimate news sources such as Wall Street Journal and CNN Breaking News over the past three years. Each tweet is a concise summary of a news article. Duplicated and extremely short tweets are removed. A news tweet is labeled as satirical if it is written by a satirical news source and legitimate if it is from a legitimate news source. Table TABREF2 gives an example of the tweet instances that comprise our dataset.
Methodology ::: Semantic Feature Extraction
Satirical news is not based on facts and does not aim to state them. Rather, it uses parody or humor to make statements, criticize, or simply amuse. In order to achieve such an effect, contradictions are greatly utilized. Therefore, inconsistencies are prominent in different parts of a satirical news tweet. In addition, there is a lack of entities, or an inconsistency between entities, in news satire. We extracted these features at the semantic level from different sub-structures of the news tweet. Different structural parts of the sentence are derived by part-of-speech tagging and named entity recognition with Flair. The inconsistencies in different structures are measured by the cosine similarity of word phrases, where words are represented by GloVe word vectors. We explored three different aspects of inconsistency and designed metrics for their measurement. A word-level feature using tf-idf BIBREF22 is added for robustness.
Methodology ::: Semantic Feature Extraction ::: Inconsistency in Noun Phrase Structures
One way for news satire to obtain a surprise or humor effect is to combine irrelevant or rarely co-occurring attributes with the head noun they modify. For example, noun phrases such as “rampant accountability", “posthumous apology", “Vatican basement", “self-imposed mental construct" and other rare combinations are widely used in satirical news, while the individual words themselves are common. To measure such inconsistency, we first select all leaf noun phrases (NP) extracted from the semantic trees to avoid repeated calculation. Then, for each noun phrase, each adjacent word pair is selected and represented by 100-dimensional GloVe word vectors denoted as $(v_{t},w_{t})$. We define the averaged cosine similarity of noun phrase word pairs as:
where $T$ is the total number of word pairs. We use $S_{N\!P}$ as a feature to capture the overall inconsistency in noun phrase use. $S_{N\!P}$ ranges from -1 to 1, where a smaller value indicates a more significant inconsistency.
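A minimal sketch of this feature in Python, assuming GloVe vectors are available as a dictionary `glove` mapping words to NumPy arrays and that leaf noun phrases have already been extracted as token lists (the parsing step is omitted); this presumably amounts to $S_{N\!P}=\frac{1}{T}\sum _{t}\cos (v_{t},w_{t})$:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two word vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def np_inconsistency(leaf_noun_phrases, glove):
    """Averaged cosine similarity S_NP over adjacent word pairs in leaf noun phrases."""
    sims = []
    for phrase in leaf_noun_phrases:                # each phrase is a list of tokens
        words = [w.lower() for w in phrase if w.lower() in glove]
        for v_t, w_t in zip(words, words[1:]):      # adjacent word pairs (v_t, w_t)
            sims.append(cosine(glove[v_t], glove[w_t]))
    return sum(sims) / len(sims) if sims else 0.0

# e.g. np_inconsistency([["posthumous", "apology"], ["Vatican", "basement"]], glove)
```

The clause-level feature $S_{Q\!P}$ and the entity feature $S_{N\!E\!R\!N}$ below can reuse the same cosine helper, applied to summed clause vectors and to named-entity/noun-phrase vectors, respectively.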
Methodology ::: Semantic Feature Extraction ::: Inconsistency Between Clauses
Another commonly used rhetorical approach in news satire is to create a contradiction between the main clause and its prepositional phrase or relative clause. For instance, in the tweet “Trump boys counter Chinese currency manipulation $by$ adding extra zeros To $20 Bills.", contradiction or surprise is gained by contrasting irrelevant statements provided by different parts of the sentence. Let $q$ and $p$ denote two clauses separated by a main/relative relation or a preposition, and $(w_{1},w_{2},\ldots ,w_{q})$ and $(v_{1},v_{2},\ldots ,v_{p})$ be the vectorized words in $q$ and $p$. Then we define the inconsistency between $q$ and $p$ as:
Similarly, the feature $S_{Q\!P}$ is measured by the cosine similarity of linear summations of word vectors, where a smaller value indicates a more significant inconsistency.
Methodology ::: Semantic Feature Extraction ::: Inconsistency Between Named Entities and Noun Phrases
Even though many satirical news tweets are based on real persons or events, most of them lack specific entities. Rather, because the news is fabricated, news writers use words such as “man",“woman",“local man", “area woman",“local family" as the subject. However, the inconsistency between named entities and noun phrases often exists in news satire if a named entity is included. For example, the named entity “Andrew Yang" and the noun phrase “time vortex" show greater inconsistency than “President Trump", "Senate Republicans", and “White House" do in the legitimate news “President Trump invites Senate Republicans to the White House to talk about the funding bill." We define such inconsistency as a categorical feature such that:
$S_{N\! E\! R\! N}$ is the cosine similarity of the named entities and noun phrases of a certain sentence, and $\bar{S}_{N\! E\! R\! N}$ is the mean value of $S_{N\! E\! R\! N}$ in the corpus.
Methodology ::: Semantic Feature Extraction ::: Word Level Feature Using TF-IDF
We calculated the difference in tf-idf scores between the legitimate news corpus and the satirical news corpus for each word. Then, the set $S_{voc}$ that includes the most representative legitimate news words is created by selecting the top 100 words by tf-idf difference. For a news tweet and any word $w$ in the tweet, we define the binary feature $B_{voc}$ as:
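A sketch of one plausible reading of this feature: $B_{voc}=1$ if the tweet contains at least one word from $S_{voc}$, and 0 otherwise. The corpus-level tf-idf difference is approximated below with mean per-document scores, since the exact aggregation is not spelled out here:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def top_legit_words(legit_docs, satire_docs, k=100):
    """Top-k words whose mean tf-idf score is higher in legitimate than in satirical news."""
    vec = TfidfVectorizer()
    X = vec.fit_transform(legit_docs + satire_docs)
    n_legit = len(legit_docs)
    diff = (np.asarray(X[:n_legit].mean(axis=0)).ravel()
            - np.asarray(X[n_legit:].mean(axis=0)).ravel())
    vocab = np.array(vec.get_feature_names_out())
    return set(vocab[np.argsort(diff)[-k:]])        # S_voc

def b_voc(tweet_tokens, s_voc):
    # 1 if the tweet contains at least one representative legitimate-news word, else 0.
    return int(any(w.lower() in s_voc for w in tweet_tokens))
```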
Methodology ::: GTRS Decision Model
We construct a Game-theoretic Rough Sets model for classification given the extracted features. Suppose $E\subseteq U \times U$ is an equivalence relation on a finite nonempty universe of objects $U$, where $E$ is reflexive, symmetric, and transitive. The equivalence class containing an object $x$ is given by $[x]=\lbrace y\in U|xEy\rbrace $. The objects in one equivalence class all have the same attribute values. In the satirical news context, given an undefined concept $satire$, probabilistic rough sets divide all news into three pairwise disjoint groups i.e., the satirical group $POS(satire)$, legitimate group $NEG(satire)$, and deferral group $BND(satire)$, by using the conditional probability $Pr(satire|[x]) = \frac{|satire\cap [x]|}{|[x]|}$ as the evaluation function, and $(\alpha ,\beta )$ as the acceptance and rejection thresholds BIBREF23, BIBREF9, BIBREF10, that is,
Given an equivalence class $[x]$, if the conditional probability $Pr(satire|[x])$ is greater than or equal to the specified acceptance threshold $\alpha $, i.e., $Pr(satire|[x])\ge \alpha $, we accept the news in $[x]$ as $satirical$. If $Pr(satire|[x])$ is less than or equal to the specified rejection threshold $\beta $, i.e., $Pr(satire|[x])\le \beta $ we reject the news in $[x]$ as $satirical$, or we accept the news in $[x]$ as $legitimate$. If $Pr(satire|[x])$ is between $\alpha $ and $\beta $, i.e., $\beta <Pr(satire|[x])<\alpha $, we defer to make decisions on the news in $[x]$. Pawlak rough sets can be viewed as a special case of probabilistic rough sets with $(\alpha ,\beta )=(1,0)$.
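The resulting decision rule can be written compactly; the sketch below follows the description above, with the Pawlak model recovered by setting $(\alpha ,\beta )=(1,0)$:

```python
def three_way_classify(pr_satire_given_x, alpha, beta):
    """Assign an equivalence class to a region given acceptance/rejection thresholds."""
    if pr_satire_given_x >= alpha:
        return "satirical"    # POS(satire): accept
    if pr_satire_given_x <= beta:
        return "legitimate"   # NEG(satire): reject
    return "deferral"         # BND(satire): defer the decision

# Pawlak rough sets as a special case: three_way_classify(p, alpha=1.0, beta=0.0)
```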
Given a pair of probabilistic thresholds $(\alpha , \beta )$, we can obtain a news classifier according to Equation (DISPLAY_FORM13). The three regions are a partition of the universe $U$,
Then, the accuracy and coverage rate to evaluate the performance of the derived classifier are defined as follows BIBREF12,
The criterion coverage indicates the proportion of news that can be confidently classified. Next, we will obtain $(\alpha , \beta )$ by game formulation and repetition learning.
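A sketch of how accuracy and coverage can be computed from the equivalence classes, assuming the standard GTRS convention that accuracy is measured only on the classified (non-deferral) region; each class is represented by its size and its conditional probability $Pr(satire|X_i)$:

```python
def accuracy_and_coverage(equiv_classes, alpha, beta):
    """equiv_classes: list of (n_objects, pr_satire) pairs, one per equivalence class.
    Returns (accuracy, coverage) of the three-way classifier induced by (alpha, beta)."""
    total = sum(n for n, _ in equiv_classes)
    classified = correct = 0.0
    for n, p in equiv_classes:
        if p >= alpha:            # accepted as satirical
            classified += n
            correct += n * p      # expected number of truly satirical news
        elif p <= beta:           # rejected, i.e. classified as legitimate
            classified += n
            correct += n * (1 - p)
    coverage = classified / total
    accuracy = correct / classified if classified else 0.0
    return accuracy, coverage
```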
Methodology ::: GTRS Decision Model ::: Game Formulation
We construct a game $G=\lbrace O,S,u\rbrace $ given the set of game players $O$, the set of strategy profiles $S$, and the payoff functions $u$, where accuracy and coverage are the two players, i.e., $O=\lbrace acc, cov\rbrace $.
The set of strategy profiles $S=S_{acc}\times S_{cov}$, where $S_{acc}$ and $S_{cov}$ are the sets of possible strategies or actions performed by players $acc$ and $cov$. The initial thresholds are set as $(1,0)$. All these strategies are changes made to the initial thresholds,
$c_{acc}$ and $c_{cov}$ denote the change steps used by the two players, and their values are determined by the concrete experimental data set.
Payoff functions. The payoffs of players are $u=(u_{acc},u_{cov})$, and $u_{acc}$ and $u_{cov}$ denote the payoff functions of players $acc$ and $cov$, respectively. Given a strategy profile $p=(s, t)$ with player $acc$ performing $s$ and player $cov$ performing $t$, the payoffs of $acc$ and $cov$ are $u_{acc}(s, t)$ and $u_{cov}(s, t)$. We use $u_{acc}(\alpha ,\beta )$ and $u_{cov}(\alpha ,\beta )$ to show this relationship. The payoff functions $u_{acc}(\alpha ,\beta )$ and $u_{cov}(\alpha ,\beta )$ are defined as,
where $Acc_{(\alpha , \beta )}(Satire)$ and $Cov_{(\alpha , \beta )}(Satire)$ are the accuracy and coverage defined in Equations (DISPLAY_FORM15) and (DISPLAY_FORM16).
Payoff table. We use payoff tables to represent the formulated game. Table TABREF20 shows an example payoff table in which both players have the three strategies defined above.
The arrow $\downarrow $ denotes decreasing a value and $\uparrow $ denotes increasing a value. In each cell, the threshold values are determined by the strategies of the two players.
Methodology ::: GTRS Decision Model ::: Repetition Learning Mechanism
We repeat the game with the new thresholds until a balanced solution is reached. We first analyze the pure-strategy equilibrium of the game and then check whether the stopping criteria are satisfied.
Game equilibrium. The game solution of pure strategy Nash equilibrium is used to determine possible game outcomes in GTRS. The strategy profile $(s_{i},t_{j})$ is a pure strategy Nash equilibrium, if
This means that neither player would like to change their strategy, since they would lose payoff by deviating from this strategy profile, provided the other player's strategy is known.
Repetition of games. Assuming that we formulate a game, in which the initial thresholds are $(\alpha , \beta )$, and the equilibrium analysis shows that the thresholds corresponding to the equilibrium are $(\alpha ^{*}, \beta ^{*})$. If the thresholds $(\alpha ^{*}, \beta ^{*})$ do not satisfy the stopping criterion, we will update the initial thresholds in the subsequent games. The initial thresholds of the new game will be set as $(\alpha ^{*}, \beta ^{*})$. If the thresholds $(\alpha ^{*}, \beta ^{*})$ satisfy the stopping criterion, we may stop the repetition of games.
Stopping criterion. We define the stopping criteria so that the iterations of games can stop at a proper time. In this research, we set the stopping criterion as within the range of thresholds, the increase of one player's payoff is less than the decrease of the other player's payoff.
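Putting the game formulation and the repetition mechanism together, one possible reading is sketched below. It assumes each player has three strategies per game, that one player's strategies lower $\alpha$ while the other's raise $\beta$ in multiples of the learning-rate step (which threshold belongs to which player is an assumption), and that a single pure-strategy equilibrium exists; `payoff_fn` stands for a function such as `accuracy_and_coverage` above:

```python
def pure_nash(payoff):
    """payoff: {(i, j): (u_acc, u_cov)}; return all pure-strategy Nash equilibria."""
    I = sorted({i for i, _ in payoff})
    J = sorted({j for _, j in payoff})
    return [(i, j) for (i, j) in payoff
            if all(payoff[(k, j)][0] <= payoff[(i, j)][0] for k in I)
            and all(payoff[(i, k)][1] <= payoff[(i, j)][1] for k in J)]

def repeat_games(payoff_fn, alpha=1.0, beta=0.0, step=0.03, n_strategies=3):
    """payoff_fn(alpha, beta) -> (accuracy, coverage); returns balanced (alpha, beta)."""
    while True:
        payoff = {(i, j): payoff_fn(alpha - i * step, beta + j * step)
                  for i in range(n_strategies) for j in range(n_strategies)}
        i, j = pure_nash(payoff)[0]                  # assume one equilibrium exists
        acc0, cov0 = payoff[(0, 0)]
        acc1, cov1 = payoff[(i, j)]
        new_alpha, new_beta = alpha - i * step, beta + j * step
        # stop when the payoff gain no longer outweighs the loss, or when further
        # changes would push the thresholds outside 0 < beta < alpha < 1
        if (cov1 - cov0) <= (acc0 - acc1) or not (0 < new_beta < new_alpha < 1):
            return alpha, beta
        alpha, beta = new_alpha, new_beta
```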
Experiments
There are 8757 news records in our preprocessed data set. We use Jenks natural breaks BIBREF24 to discretize continuous variables $S_{N\!P}$ and $S_{Q\!P}$ both into five categories denoted by nominal values from 0 to 4, where larger values still fall into bins with larger nominal value. Let $D_{N\!P}$ and $D_{Q\!P}$ denote the discretized variables $S_{N\!P}$ and $S_{Q\!P}$, respectively. We derived the information table that only contains discrete features from our original dataset. A fraction of the information table is shown in Table TABREF23.
The news whose condition attributes have the same values are classified in an equivalence class $X_i$. We derived 149 equivalence classes and calculated the corresponding probability $Pr(X_i)$ and condition probability $Pr(Satire|X_i)$ for each $X_i$. The probability $Pr(X_{i})$ denotes the ratio of the number of news contained in the equivalence class $X_i$ to the total number of news in the dataset, while the conditional probability $Pr(Satire|X_{i})$ is the proportion of news in $X_i$ that are satirical. We combine the equivalence classes with the same conditional probability and reduce the number of equivalence classes to 108. Table TABREF24 shows a part of the probabilistic data information about the concept satire.
Experiments ::: Finding Thresholds with GTRS
We formulated a competitive game between the criteria accuracy and coverage to obtain the balanced probabilistic thresholds, with the initial thresholds $(\alpha , \beta )=(1,0)$ and a learning rate of 0.03. As shown in the payoff table (Table TABREF26),
the cell at the right bottom corner is the game equilibrium whose strategy profile is ($\beta $ increases 0.06, $\alpha $ decreases 0.06). The payoffs of the players are (0.9784,0.3343). We set the stopping criterion as the increase of one player's payoff is less than the decrease of the other player's payoff when the thresholds are within the range. When the thresholds change from (1,0) to (0.94, 0.06), the accuracy is decreased from 1 to 0.9784 but the coverage is increased from 0.0795 to 0.3343. We repeat the game by setting $(0.94, 0.06)$ as the next initial thresholds.
The competitive games are repeated seven times. The result is shown in Table TABREF27.
After the eighth iteration, the repetition of games is stopped because further changes to the thresholds would cause them to lie outside the range $0 < \beta < \alpha <1$; the final result is the equilibrium of the seventh game, $(\alpha , \beta )=(0.52, 0.48)$.
Experiments ::: Results
We compare Pawlak rough sets, SVM, and our GTRS approach on the proposed dataset. Table TABREF29 shows the results on the experimental data.
The SVM classifier achieved an accuracy of $78\%$ with a $100\%$ coverage. The Pawlak rough set model using $(\alpha , \beta )=(1,0)$ achieves a $100\%$ accuracy and a coverage ratio of $7.95\%$, which means it can only classify $7.95\%$ of the data. The classifier constructed by GTRS with $(\alpha , \beta )=(0.52, 0.48)$ reached an accuracy of $82.71\%$ and a coverage of $97.49\%$, which indicates that $97.49\%$ of the data can be classified with an accuracy of $82.71\%$. The remaining $2.51\%$ of the data cannot be classified without providing more information. To make our method comparable to other baselines such as SVM, we assume random guessing is made on the deferral region and report the modified accuracy. The modified accuracy for our approach is then $0.8271\times 0.9749 + 0.5 \times 0.0251 =81.89\%$. Our method shows a significant improvement compared with the Pawlak model and SVM.
Conclusion
In this paper, we propose a satirical news detection approach based on extracted semantic features and game-theoretic rough sets. In our model, the semantic feature extraction captures the inconsistency in the different structural parts of the sentences, and the GTRS classifier can process the incomplete information based on repetitive learning and the acceptance and rejection thresholds. The experimental results on our created satirical and legitimate news tweet dataset show that our model significantly outperforms the Pawlak rough set model and SVM. In particular, we demonstrate our model's ability to interpret satirical news detection from a semantic and information trade-off perspective. An interesting extension of our work would be to use rough set models to extract linguistic features at the document level. | Inconsistency in Noun Phrase Structures, Inconsistency Between Clauses, Inconsistency Between Named Entities and Noun Phrases, Word Level Feature Using TF-IDF
bdc1f37c8b5e96e3c29cc02dae4ce80087d83284 | bdc1f37c8b5e96e3c29cc02dae4ce80087d83284_0 | Q: What do they use as a metric for finding hot spots in meetings?
Text: Introduction and Prior Work
A definition of the meeting “hot spots” was first introduced in BIBREF2, where it was investigated whether human annotators could reliably identify regions in which participants are “highly involved in the discussion”. The motivation was that meetings generally have low information density and are tedious to review verbatim after the fact. An automatic system that could detect regions of high interest (as indicated by the involvement of the participants during the meeting) would thus be useful. Relatedly, automatic meeting summarization could also benefit from such information to give extra weight to hot spot regions in selecting or abstracting material for inclusion in the summary. Later work on the relationship between involvement and summarization BIBREF3 defined a different approach: hot spots are those regions chosen for inclusion in a summary by human annotators (“summarization hot spots”). In the present work we stick with the original “involvement hot spot” notion, and refer to such regions simply as “hot spots”, regardless of their possible role in summarization. We note that high involvement may be triggered both by a meeting's content (“what is being talked about”, and “what may be included in a textual summary”), as well as behavioral and social factors, such as a desire to participate, to stake out a position, or to oppose another participant. A related notion in dialog system research is “level of interest” BIBREF4.
The initial research on hot spots focused on the reliability of human annotators and correlations with certain low-level acoustic features, such as pitch BIBREF2. Also investigated were the correlation between hot spots and dialog acts BIBREF5 and hot spots and speaker overlap BIBREF6, without however conducting experiments in automatic hot spot prediction using machine learning techniques. Laskowski BIBREF7 redefined the hot spot annotations in terms of time-based windows over meetings, and investigated various classifier models to detect “hotness” (i.e., elevated involvement). However, that work focused on only two types of speech features: presence of laughter and the temporal patterns of speech activity across the various participants, both of which were found to be predictive of involvement.
For the related problem of level-of-interest prediction in dialog systems BIBREF8, it was found that content-based classification can also be effective, using both a discriminative TF-IDF model and lexical affect scores, as well as prosodic features. In line with the earlier hot spot research on interaction patterns and speaker overlap, turn-taking features were shown to be helpful for spotting summarization hot spots, in BIBREF3, and even more so than the human involvement annotations. The latter result confirms our intuition that summarization-worthiness and involvement are different notions of “hotness”.
In this paper, following Laskowski, we focus on the automatic prediction of the speakers' involvement in sliding-time windows/segments. We evaluate machine learning models based on a range of features that can be extracted automatically from audio recordings, either directly via signal processing or via the use of automatic transcriptions (ASR outputs). In particular, we investigate the relative contributions of three classes of information:
low-level acoustic-prosodic features, such as those commonly used in other paralinguistic tasks, such as sentiment analysis (extracted using openSMILE BIBREF0);
spoken word content, as encoded with a state-of-the-art lexical embedding approach such as BERT BIBREF1;
speaker interaction, based on speech activity over time and across different speakers.
We attach lower importance to laughter, even though it was found to be highly predictive of involvement in the ICSI corpus, partly because we believe it would not transfer well to more general types of (e.g., business) meetings, and partly because laughter detection is still a hard problem in itself BIBREF9. Generation of speaker-attributed meeting transcriptions, on the other hand, has seen remarkable progress BIBREF10 and could support the features we focus on here.
Data
The ICSI Meeting Corpus BIBREF11 is a collection of meeting recordings that has been thoroughly annotated, including annotations for involvement hot spots BIBREF12, linguistic utterance units, and word time boundaries based on forced alignment. The dataset comprises 75 meetings and about 70 hours of real-time audio duration, with 6 speakers per meeting on average. Most of the participants are well-acquainted and friendly with each other. Hot spots were originally annotated with 8 levels and degrees, ranging from `not hot' to `luke warm' to `hot +'. Every utterance was labeled with one of these discrete labels by a single annotator. Heightened involvement is rare, being marked on only 1% of utterances.
Due to the severe imbalance in the label distribution, Laskowski BIBREF13 proposed extending the involvement, or hotness, labels to sliding time windows. In our implementation (details below), this resulted in 21.7% of samples (windows) being labeled as “involved”.
We split the corpus into three subsets: training, development, and evaluation, keeping meetings intact. Table TABREF4 gives statistics of these partitions.
We were concerned with the relatively small number of meetings in the test sets, and repeated several of our experiments with a (jackknifing) cross-validation setup over the training set. The results obtained were very similar to those with the fixed train/test split results that we report here.
Data ::: Time Windowing
As stated above, the corpus was originally labeled for hot spots at the utterance level, where involvement was marked by either a `b' or a `b+' label. Training and test samples for our experiments correspond to 60 s-long sliding windows, with a 15 s step size. If a certain window, e.g., a segment spanning the times 15 s ...75 s, overlaps with any involved speech utterance, then we label that whole window as `hot'. Fig. FIGREF6 gives a visual representation.
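A sketch of the windowing step, assuming utterances are given as (start, end, involved) triples in seconds; the handling of the final partial window is a guess:

```python
def window_labels(meeting_duration, utterances, win=60.0, step=15.0):
    """utterances: list of (start_sec, end_sec, is_involved) tuples.
    Returns (win_start, win_end, label) triples; a window is 'hot' if it overlaps
    any involved utterance."""
    windows = []
    t = 0.0
    while t + win <= meeting_duration:
        hot = any(s < t + win and e > t and involved
                  for s, e, involved in utterances)
        windows.append((t, t + win, "hot" if hot else "not hot"))
        t += step
    return windows
```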
Data ::: Metric
In spite of the windowing approach, the class distribution is still skewed, and an accuracy metric would reflect the particular class distribution in our data set. Therefore, we adopt the unweighted average recall (UAR) metric commonly used in emotion classification research. UAR is a reweighted accuracy where the samples of both classes are weighted equally in aggregate. UAR thus simulates a uniform class distribution. To match the objective, our classifiers are trained on appropriately weighted training data. Note that chance performance for UAR is by definition 50%, making results more comparable across different data sets.
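For concreteness, UAR is simply the mean of per-class recalls, equivalent to scikit-learn's `balanced_accuracy_score` for this two-class setup:

```python
import numpy as np

def uar(y_true, y_pred):
    """Unweighted average recall: mean of per-class recalls (chance = 0.5 for 2 classes)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(recalls))
```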
Feature Description ::: Acoustic-Prosodic Features
Prosody encompasses pitch, energy, and durational features of speech. Prosody is thought to convey emphasis, sentiment, and emotion, all of which are presumably correlated with expressions of involvement. We used the openSMILE toolkit BIBREF0 to compute 988 features as defined by the emobase988 configuration file, operating on the close-talking meeting recordings. This feature set consists of low-level descriptors such as intensity, loudness, Mel-frequency cepstral coefficients, and pitch. For each low-level descriptor, functionals such as max/min value, mean, standard deviation, kurtosis, and skewness are computed. Finally, global mean and variance normalization are applied to each feature, using training set statistics. The feature vector thus captures acoustic-prosodic features aggregated over what are typically utterances. We tried extracting openSMILE features directly from 60 s windows, but found better results by extracting subwindows of 5 s, followed by pooling over the longer 60 s duration. We attribute this to the fact that emobase features are designed to operate on individual utterances, which have durations closer to 5 s than 60 s.
Feature Description ::: Word-Based Features ::: Bag of words with TF-IDF
Initially, we investigated a simple bag-of-words model including all unigrams, bigrams, and trigrams found in the training set. Occurrences of the top 10,000 n-grams were encoded to form a 10,000-dimensional vector, with values weighted according to TF-IDF. TF-IDF weights n-grams according to both their frequency (TF) and their salience (inverse document frequency, IDF) in the data, where each utterance was treated as a separate document. The resulting feature vectors are very sparse.
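A sketch of this baseline with scikit-learn, where `train_utterances` and `test_utterances` are placeholder lists of utterance strings; `max_features=10000` keeps the most frequent n-grams, which matches the top-10,000 selection described above:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

train_utterances = ["yeah that's a good point", "we should try the new decoder"]  # placeholders
test_utterances = ["what about the decoder"]

# 1- to 3-grams, capped at the 10,000 most frequent n-grams in the training data
vectorizer = TfidfVectorizer(ngram_range=(1, 3), max_features=10000)
X_train = vectorizer.fit_transform(train_utterances)   # sparse, n_utterances x vocab
X_test = vectorizer.transform(test_utterances)
```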
Feature Description ::: Word-Based Features ::: Embeddings
The ICSI dataset is too small to train a neural embedding model from scratch. Therefore, it is convenient to use the pre-trained BERT embedding architecture BIBREF1 to create an utterance-level embedding vector for each region of interest. Having been trained on a large text corpus, the resulting embeddings encode semantic similarities among utterances, and would enable generalization from word patterns seen in the ICSI training data to those that have not been observed on that limited corpus.
We had previously also created an adapted version of the BERT model, tuned to perform utterance-level sentiment classification, on a separate dataset BIBREF14. As proposed in BIBREF1, we fine-tuned all layers of the pre-trained BERT model by adding a single fully-connected layer and classifying using only the embedding corresponding to the classification ([CLS]) token prepended to each utterance. The difference in UAR between the hot spot classifiers using the pre-trained embeddings and those using the sentiment-adapted embeddings is small. Since the classifier using embeddings extracted by the sentiment-adapted model yielded slightly better performance, we report all results using these as input.
To obtain a single embedding for each 60 s window, we experimented with various approaches of pooling the token and utterance-level embeddings. For our first approach, we ignored the ground-truth utterance segmentation and speaker information. We merged all words spoken within a particular window into a single contiguous span. Following BIBREF1, we added the appropriate classification and separation tokens to the text and selected the embedding corresponding to the [CLS] token as the window-level embedding. Our second approach used the ground-truth segmentation of the dialogue. Each speaker turn was independently modeled, and utterance-level embeddings were extracted using the representation corresponding to the [CLS] token. Utterances that cross window boundaries are truncated using the word timestamps, so only words spoken within the given time window are considered. For all reported experiments, we use L2-norm pooling to form the window-level embeddings for the final classifier, as this performed better than either mean or max pooling.
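A sketch of the utterance-level [CLS] extraction and L2-norm pooling, using the stock `bert-large-uncased` model from the `transformers` library as a 1024-dimensional stand-in for the sentiment-adapted model (which is not publicly specified here); L2-norm pooling is interpreted as the element-wise L2 norm across utterances:

```python
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
model = AutoModel.from_pretrained("bert-large-uncased").eval()

def utterance_embedding(text):
    """[CLS] embedding of a single utterance."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state       # (1, seq_len, 1024)
    return hidden[0, 0].numpy()                          # [CLS] token

def window_embedding(utterance_texts):
    """L2-norm pooling of utterance-level [CLS] embeddings within one 60 s window."""
    embs = np.stack([utterance_embedding(t) for t in utterance_texts])
    return np.sqrt(np.sum(embs ** 2, axis=0))            # element-wise L2 norm over utterances
```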
Feature Description ::: Speaker Activity Features
These features were a compilation of three different feature types:
Speaker overlap percentages: Based on the available word-level times, we computed a 6-dimensional feature vector, where the $i$th index indicates the fraction of time that $i$ or more speakers are talking within a given window. This can be expressed by $\frac{t_i}{60}$ with $t_i$ indicating the time in seconds that $i$ or more people were speaking at the same time.
Unique speaker count: Counts the unique speakers within a window, as a useful metric to track the diversity of participation within a certain window.
Turn switch count: Counts the number of times a speaker begins talking within a window. This is a similar metric to the number of utterances. However, unlike utterance count, turn switches can be computed entirely from speech activity, without requiring a linguistic segmentation.
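A sketch of these three feature types, assuming word-level timestamps with integer speaker indices; the turn-switch count is approximated by counting speaker changes between consecutive words ordered by start time:

```python
import numpy as np

def speaker_activity_features(words, win_start, win_end, n_speakers=6, frame=0.01):
    """words: list of (speaker_id, start_sec, end_sec, word) within the window.
    Returns overlap fractions, unique-speaker count, and turn-switch count."""
    frames = np.arange(win_start, win_end, frame)
    active = np.zeros((n_speakers, len(frames)), dtype=bool)
    for spk, s, e, _ in words:
        active[spk, (frames >= s) & (frames < e)] = True
    concurrent = active.sum(axis=0)                                  # speakers talking per frame
    overlap = [float(np.mean(concurrent >= i)) for i in range(1, n_speakers + 1)]  # 6-dim t_i/60
    unique_speakers = len({spk for spk, _, _, _ in words})
    turns = sorted(words, key=lambda w: w[1])
    turn_switches = sum(1 for prev, cur in zip(turns, turns[1:]) if cur[0] != prev[0])
    return overlap, unique_speakers, turn_switches
```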
Feature Description ::: Laughter Count
Laskowski found that laughter is highly predictive of involvement in the ICSI data. Laughter is annotated on an utterance level and falls into two categories: laughter solely on its own (no words) or laughter contained within an utterance (i.e. during speech). The feature is a simple tally of the number of times people laughed within a window. We include it in some of our experiments for comparison purposes, though we do not trust it as a general feature. (The participants in the ICSI meetings are far too familiar and at ease with each other to be representative with regard to laughter.)
Modeling ::: Non-Neural Models
In preliminary experiments, we compared several non-neural classifiers, including logistic regression (LR), random forests, linear support vector machines, and multinomial naive Bayes. Logistic regression gave the best results all around, and we used it exclusively for the results shown here, unless neural networks are used instead.
Modeling ::: Feed-Forward Neural Networks ::: Pooling Techniques
For BERT and openSMILE vector classification, we designed two different feed-forward neural network architectures. The sentiment-adapted embeddings described in Section SECREF3 produce one 1024-dimensional vector per utterance. Since all classification operates on time windows, we had to pool over all utterances falling within a given window, taking care to truncate words falling outside the window. We tested four pooling methods: L2-norm, mean, max, and min, with L2-norm giving the best results.
As for the prosodic model, each vector extracted from openSMILE represents a 5 s interval. Since there was both a channel/speaker-axis and a time-axis, we needed to pool over both dimensions in order to have a single vector representing the prosodic features of a 60 s window. The second to last layer is the pooling layer, max-pooling across all the channels, and then mean-pooling over time. The output of the pooling layer is directly fed into the classifier.
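A possible PyTorch layout of this network, using the 988-512-128-16-Pool-2 architecture and the dropout/activation hyperparameters reported in the experiments section; the exact ordering of dropout and activations is an assumption:

```python
import torch
import torch.nn as nn

class ProsodyNet(nn.Module):
    """988-512-128-16-Pool-2: per-subwindow MLP, then max over channels, mean over time."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(988, 512), nn.Tanh(), nn.Dropout(0.4),
            nn.Linear(512, 128), nn.Tanh(), nn.Dropout(0.4),
            nn.Linear(128, 16), nn.Tanh(),
        )
        self.out = nn.Linear(16, 2)

    def forward(self, x):
        # x: (batch, channels, subwindows, 988) openSMILE vectors for one 60 s window
        h = self.mlp(x)              # (batch, channels, subwindows, 16)
        h = h.max(dim=1).values      # max-pool across speaker channels
        h = h.mean(dim=1)            # mean-pool over time (subwindow axis)
        return self.out(h)           # class scores; log-softmax gives log-posteriors
```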
Modeling ::: Feed-Forward Neural Networks ::: Hyperparameters
The hyperparameters of the neural networks (hidden layer number and sizes) were also tuned in preliminary experiments. Details are given in Section SECREF5.
Modeling ::: Model Fusion
Fig. FIGREF19 depicts the way features from multiple categories are combined. Speech activity and word features are fed directly into a final LR step. Acoustic-prosodic features are first combined in a feed-forward neural classifier, whose output log posteriors are in turn fed into the LR step for fusion. (When using only prosodic features, the ANN outputs are used directly.)
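A sketch of the fusion step, where `speech_activity`, `word_embeddings`, and `prosody_log_posteriors` are placeholder per-window feature matrices; class-balanced weighting stands in for the reweighted training data mentioned earlier:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fuse_and_train(speech_activity, word_embeddings, prosody_log_posteriors, labels):
    """Late fusion: concatenate speech-activity features, pooled word embeddings, and
    the prosodic ANN's log-posteriors, then train a logistic-regression classifier."""
    X = np.hstack([speech_activity, word_embeddings, prosody_log_posteriors])
    clf = LogisticRegression(max_iter=1000, class_weight="balanced")
    clf.fit(X, labels)
    return clf
```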
Experiments
We group experiments by the type of features they are based on: acoustic-prosodic, word-based, and speech activity, evaluating each group first by itself, and then in combination with others.
Experiments ::: Speech Feature Results
As discussed in Section SECREF3, a multitude of input features were investigated, with some being more discriminative. The most useful speech activity features were speaker overlap percentage, number of unique speakers, and number of turn switches, giving evaluation set UARs of 63.5%, 63.9%, and 66.6%, respectively. When combined the UAR improved to 68.0%, showing that these features are partly complementary.
Experiments ::: Word-Based Results
The TF-IDF model alone gave a UAR of 59.8%. A drastic increase in performance to 70.5% was found when using the BERT embeddings instead. Therefore we adopted embeddings for all further experiments based on word information.
Three different types of embeddings were investigated, i.e. sentiment-adapted embeddings at an utterance-level, unadapted embeddings at the utterance-level, and unadapted embeddings over time windows.
The adapted embeddings (on utterances) performed best, indicating that adaptation to the sentiment task is useful for involvement classification. It is important to note, however, that the set of utterance-level embeddings is larger than the set of window-level embeddings. This is due to there being more utterances than windows in the meeting corpus.
The best neural architecture we found for these embeddings is a 5-layer neural network with sizes 1024-64-32-12-2. Other hyperparameters for this model are dropout rate = 0.4, learning rate = $10^{-7}$ and activation function “tanh”. The UAR on the evaluation set with just BERT embeddings as input is 65.2%.
Interestingly, the neural model was outperformed by a LR directly on the embedding vectors. Perhaps the neural network requires further fine-tuning, or the neural model is too prone to overfitting, given the small training corpus. In any case, we use LR on embeddings for all subsequent results.
Experiments ::: Acoustic-Prosodic Feature Results
Our prosodic model is a 5-layer ANN, as described in Section SECREF15. The architecture is: 988-512-128-16-Pool-2. The hyperparameters are: dropout rate = 0.4, learning rate = $10^{-7}$, activation = “tanh". The UAR on the evaluation set with just openSMILE features is 62.0%.
Experiments ::: Fusion Results and Discussion
Table TABREF24 gives the UAR for each feature subset individually, for all features combined, and for a combination in which one feature subset in turn is left out. The one-feature-set-at-a-time results suggest that prosody, speech activity and words are of increasing importance in that order. The leave-one-out analysis agrees that the words are the most important (largest drop in accuracy when removed), but on that criterion the prosodic features are more important than speech activity. The combination of all features is 0.4% absolute better than any other subset, showing that all feature subsets are partly complementary.
Fig. FIGREF25 shows the same results in histogram form, but also adds those with laughter information. Laughter count by itself is the strongest cue to involvement, as Laskowski BIBREF7 had found. However, even given the strong individual laughter feature, the other features add information, pushing the UAR from 75.1% to 77.5%.
Conclusion
We studied detection of areas of high involvement, or “hot spots”, within meetings using the ICSI corpus. The features that yielded the best results are in line with our intuitions. Word embeddings, speech activity features such as the number of turn changes, and prosodic features are all plausible indicators of high involvement. Furthermore, the feature sets are partly complementary and yield the best results when combined using a simple logistic regression model. The combined model achieves 72.6% UAR, or 77.5% with the laughter feature.
For future work, we would want to see a validation on an independent meeting collection, such as business meetings. Some features, in particular laughter, are bound not to be as useful in this case. More data could also enable the training of joint models that perform an early fusion of the different feature types. Also, the present study still relied on human transcripts, and it would be important to know how much UAR suffers with a realistic amount of speech recognition error. Transcription errors are expected to boost the importance of the feature types that do not rely on words.
Acknowledgments
We thank Britta Wrede, Elizabeth Shriberg and Kornel Laskowski for explanations concerning the details of the data. | unweighted average recall (UAR) metric |
c54de73b36ab86534d18a295f3711591ce9e1784 | c54de73b36ab86534d18a295f3711591ce9e1784_0 | Q: Is this approach compared to some baseline?
Text: Introduction and Prior Work
A definition of the meeting “hot spots” was first introduced in BIBREF2, where it was investigated whether human annotators could reliably identify regions in which participants are “highly involved in the discussion”. The motivation was that meetings generally have low information density and are tedious to review verbatim after the fact. An automatic system that could detect regions of high interest (as indicated by the involvement of the participants during the meeting) would thus be useful. Relatedly, automatic meeting summarization could also benefit from such information to give extra weight to hot spot regions in selecting or abstracting material for inclusion in the summary. Later work on the relationship between involvement and summarization BIBREF3 defined a different approach: hot spots are those regions chosen for inclusion in a summary by human annotators (“summarization hot spots”). In the present work we stick with the original “involvement hot spot” notion, and refer to such regions simply as “hot spots”, regardless of their possible role in summarization. We note that high involvement may be triggered both by a meeting's content (“what is being talked about”, and “what may be included in a textual summary”), as well as behavioral and social factors, such as a desire to participate, to stake out a position, or to oppose another participant. A related notion in dialog system research is “level of interest” BIBREF4.
The initial research on hot spots focused on the reliability of human annotators and correlations with certain low-level acoustic features, such as pitch BIBREF2. Also investigated were the correlation between hot spots and dialog acts BIBREF5 and hot spots and speaker overlap BIBREF6, without however conducting experiments in automatic hot spot prediction using machine learning techniques. Laskowski BIBREF7 redefined the hot spot annotations in terms of time-based windows over meetings, and investigated various classifier models to detect “hotness” (i.e., elevated involvement). However, that work focused on only two types of speech features: presence of laughter and the temporal patterns of speech activity across the various participants, both of which were found to be predictive of involvement.
For the related problem of level-of-interest prediction in dialog systems BIBREF8, it was found that content-based classification can also be effective, using both a discriminative TF-IDF model and lexical affect scores, as well as prosodic features. In line with the earlier hot spot research on interaction patterns and speaker overlap, turn-taking features were shown to be helpful for spotting summarization hot spots, in BIBREF3, and even more so than the human involvement annotations. The latter result confirms our intuition that summarization-worthiness and involvement are different notions of “hotness”.
In this paper, following Laskowski, we focus on the automatic prediction of the speakers' involvement in sliding-time windows/segments. We evaluate machine learning models based on a range of features that can be extracted automatically from audio recordings, either directly via signal processing or via the use of automatic transcriptions (ASR outputs). In particular, we investigate the relative contributions of three classes of information:
low-level acoustic-prosodic features, such as those commonly used in other paralinguistic tasks, such as sentiment analysis (extracted using openSMILE BIBREF0);
spoken word content, as encoded with a state-of-the-art lexical embedding approach such as BERT BIBREF1;
speaker interaction, based on speech activity over time and across different speakers.
We attach lower importance to laughter, even though it was found to be highly predictive of involvement in the ICSI corpus, partly because we believe it would not transfer well to more general types of (e.g., business) meetings, and partly because laughter detection is still a hard problem in itself BIBREF9. Generation of speaker-attributed meeting transcriptions, on the other hand, has seen remarkable progress BIBREF10 and could support the features we focus on here.
Data
The ICSI Meeting Corpus BIBREF11 is a collection of meeting recordings that has been thoroughly annotated, including annotations for involvement hot spots BIBREF12, linguistic utterance units, and word time boundaries based on forced alignment. The dataset comprises 75 meetings and about 70 hours of real-time audio duration, with 6 speakers per meeting on average. Most of the participants are well-acquainted and friendly with each other. Hot spots were originally annotated with 8 levels and degrees, ranging from `not hot' to `luke warm' to `hot +'. Every utterance was labeled with one of these discrete labels by a single annotator. Heightened involvement is rare, being marked on only 1% of utterances.
Due to the severe imbalance in the label distribution, Laskowski BIBREF13 proposed extending the involvement, or hotness, labels to sliding time windows. In our implementation (details below), this resulted in 21.7% of samples (windows) being labeled as “involved”.
We split the corpus into three subsets: training, development, and evaluation, keeping meetings intact. Table TABREF4 gives statistics of these partitions.
We were concerned with the relatively small number of meetings in the test sets, and repeated several of our experiments with a (jackknifing) cross-validation setup over the training set. The results obtained were very similar to those with the fixed train/test split results that we report here.
Data ::: Time Windowing
As stated above, the corpus was originally labeled for hot spots at the utterance level, where involvement was marked by either a `b' or a `b+' label. Training and test samples for our experiments correspond to 60 s-long sliding windows, with a 15 s step size. If a certain window, e.g., a segment spanning the times 15 s ...75 s, overlaps with any involved speech utterance, then we label that whole window as `hot'. Fig. FIGREF6 gives a visual representation.
Data ::: Metric
In spite of the windowing approach, the class distribution is still skewed, and an accuracy metric would reflect the particular class distribution in our data set. Therefore, we adopt the unweighted average recall (UAR) metric commonly used in emotion classification research. UAR is a reweighted accuracy where the samples of both classes are weighted equally in aggregate. UAR thus simulates a uniform class distribution. To match the objective, our classifiers are trained on appropriately weighted training data. Note that chance performance for UAR is by definition 50%, making results more comparable across different data sets.
Feature Description ::: Acoustic-Prosodic Features
Prosody encompasses pitch, energy, and durational features of speech. Prosody is thought to convey emphasis, sentiment, and emotion, all of which are presumably correlated with expressions of involvement. We used the openSMILE toolkit BIBREF0 to compute 988 features as defined by the emobase988 configuration file, operating on the close-talking meeting recordings. This feature set consists of low-level descriptors such as intensity, loudness, Mel-frequency cepstral coefficients, and pitch. For each low-level descriptor, functionals such as max/min value, mean, standard deviation, kurtosis, and skewness are computed. Finally, global mean and variance normalization are applied to each feature, using training set statistics. The feature vector thus captures acoustic-prosodic features aggregated over what are typically utterances. We tried extracting openSMILE features directly from 60 s windows, but found better results by extracting subwindows of 5 s, followed by pooling over the longer 60 s duration. We attribute this to the fact that emobase features are designed to operate on individual utterances, which have durations closer to 5 s than 60 s.
Feature Description ::: Word-Based Features ::: Bag of words with TF-IDF
Initially, we investigated a simple bag-of-words model including all unigrams, bigrams, and trigrams found in the training set. Occurrences of the top 10,000 n-grams were encoded to form a 10,000-dimensional vector, with values weighted according to TF-IDF. TF-IDF weights n-grams according to both their frequency (TF) and their salience (inverse document frequency, IDF) in the data, where each utterance was treated as a separate document. The resulting feature vectors are very sparse.
Feature Description ::: Word-Based Features ::: Embeddings
The ICSI dataset is too small to train a neural embedding model from scratch. Therefore, it is convenient to use the pre-trained BERT embedding architecture BIBREF1 to create an utterance-level embedding vector for each region of interest. Having been trained on a large text corpus, the resulting embeddings encode semantic similarities among utterances, and would enable generalization from word patterns seen in the ICSI training data to those that have not been observed on that limited corpus.
We had previously also created an adapted version of the BERT model, tuned to perform utterance-level sentiment classification, on a separate dataset BIBREF14. As proposed in BIBREF1, we fine-tuned all layers of the pre-trained BERT model by adding a single fully-connected layer and classifying using only the embedding corresponding to the classification ([CLS]) token prepended to each utterance. The difference in UAR between the hot spot classifiers using the pre-trained embeddings and those using the sentiment-adapted embeddings is small. Since the classifier using embeddings extracted by the sentiment-adapted model yielded slightly better performance, we report all results using these as input.
To obtain a single embedding for each 60 s window, we experimented with various approaches of pooling the token and utterance-level embeddings. For our first approach, we ignored the ground-truth utterance segmentation and speaker information. We merged all words spoken within a particular window into a single contiguous span. Following BIBREF1, we added the appropriate classification and separation tokens to the text and selected the embedding corresponding to the [CLS] token as the window-level embedding. Our second approach used the ground-truth segmentation of the dialogue. Each speaker turn was independently modeled, and utterance-level embeddings were extracted using the representation corresponding to the [CLS] token. Utterances that cross window boundaries are truncated using the word timestamps, so only words spoken within the given time window are considered. For all reported experiments, we use L2-norm pooling to form the window-level embeddings for the final classifier, as this performed better than either mean or max pooling.
Feature Description ::: Speaker Activity Features
These features were a compilation of three different feature types:
Speaker overlap percentages: Based on the available word-level times, we computed a 6-dimensional feature vector, where the $i$th index indicates the fraction of time that $i$ or more speakers are talking within a given window. This can be expressed by $\frac{t_i}{60}$ with $t_i$ indicating the time in seconds that $i$ or more people were speaking at the same time.
Unique speaker count: Counts the unique speakers within a window, as a useful metric to track the diversity of participation within a certain window.
Turn switch count: Counts the number of times a speaker begins talking within a window. This is a similar metric to the number of utterances. However, unlike utterance count, turn switches can be computed entirely from speech activity, without requiring a linguistic segmentation.
Feature Description ::: Laughter Count
Laskowski found that laughter is highly predictive of involvement in the ICSI data. Laughter is annotated on an utterance level and falls into two categories: laughter solely on its own (no words) or laughter contained within an utterance (i.e. during speech). The feature is a simple tally of the number of times people laughed within a window. We include it in some of our experiments for comparison purposes, though we do not trust it as a general feature. (The participants in the ICSI meetings are far too familiar and at ease with each other to be representative with regard to laughter.)
Modeling ::: Non-Neural Models
In preliminary experiments, we compared several non-neural classifiers, including logistic regression (LR), random forests, linear support vector machines, and multinomial naive Bayes. Logistic regression gave the best results all around, and we used it exclusively for the results shown here, unless neural networks are used instead.
Modeling ::: Feed-Forward Neural Networks ::: Pooling Techniques
For BERT and openSMILE vector classification, we designed two different feed-forward neural network architectures. The sentiment-adapted embeddings described in Section SECREF3 produce one 1024-dimensional vector per utterance. Since all classification operates on time windows, we had to pool over all utterances falling within a given window, taking care to truncate words falling outside the window. We tested four pooling methods: L2-norm, mean, max, and min, with L2-norm giving the best results.
As for the prosodic model, each vector extracted from openSMILE represents a 5 s interval. Since there was both a channel/speaker-axis and a time-axis, we needed to pool over both dimensions in order to have a single vector representing the prosodic features of a 60 s window. The second to last layer is the pooling layer, max-pooling across all the channels, and then mean-pooling over time. The output of the pooling layer is directly fed into the classifier.
Modeling ::: Feed-Forward Neural Networks ::: Hyperparameters
The hyperparameters of the neural networks (hidden layer number and sizes) were also tuned in preliminary experiments. Details are given in Section SECREF5.
Modeling ::: Model Fusion
Fig. FIGREF19 depicts the way features from multiple categories are combined. Speech activity and word features are fed directly into a final LR step. Acoustic-prosodic features are first combined in a feed-forward neural classifier, whose output log posteriors are in turn fed into the LR step for fusion. (When using only prosodic features, the ANN outputs are used directly.)
Experiments
We group experiments by the type of features they are based on: acoustic-prosodic, word-based, and speech activity, evaluating each group first by itself, and then in combination with others.
Experiments ::: Speech Feature Results
As discussed in Section SECREF3, a multitude of input features were investigated, with some being more discriminative. The most useful speech activity features were speaker overlap percentage, number of unique speakers, and number of turn switches, giving evaluation set UARs of 63.5%, 63.9%, and 66.6%, respectively. When combined the UAR improved to 68.0%, showing that these features are partly complementary.
Experiments ::: Word-Based Results
The TF-IDF model alone gave a UAR of 59.8%. A drastic increase in performance to 70.5% was found when using the BERT embeddings instead. Therefore we adopted embeddings for all further experiments based on word information.
Three different types of embeddings were investigated, i.e. sentiment-adapted embeddings at an utterance-level, unadapted embeddings at the utterance-level, and unadapted embeddings over time windows.
The adapted embeddings (on utterances) performed best, indicating that adaptation to the sentiment task is useful for involvement classification. It is important to note, however, that the set of utterance-level embeddings is larger than the set of window-level embeddings. This is due to there being more utterances than windows in the meeting corpus.
The best neural architecture we found for these embeddings is a 5-layer neural network with sizes 1024-64-32-12-2. Other hyperparameters for this model are dropout rate = 0.4, learning rate = $10^{-7}$ and activation function “tanh”. The UAR on the evaluation set with just BERT embeddings as input is 65.2%.
Interestingly, the neural model was outperformed by a LR directly on the embedding vectors. Perhaps the neural network requires further fine-tuning, or the neural model is too prone to overfitting, given the small training corpus. In any case, we use LR on embeddings for all subsequent results.
Experiments ::: Acoustic-Prosodic Feature Results
Our prosodic model is a 5-layer ANN, as described in Section SECREF15. The architecture is: 988-512-128-16-Pool-2. The hyperparameters are: dropout rate = 0.4, learning rate = $10^{-7}$, activation = “tanh". The UAR on the evaluation set with just openSMILE features is 62.0%.
Experiments ::: Fusion Results and Discussion
Table TABREF24 gives the UAR for each feature subset individually, for all features combined, and for a combination in which one feature subset in turn is left out. The one-feature-set-at-a-time results suggest that prosody, speech activity and words are of increasing importance in that order. The leave-one-out analysis agrees that the words are the most important (largest drop in accuracy when removed), but on that criterion the prosodic features are more important than speech activity. The combination of all features is 0.4% absolute better than any other subset, showing that all feature subsets are partly complementary.
Fig. FIGREF25 shows the same results in histogram form, but also adds those with laughter information. Laughter count by itself is the strongest cue to involvement, as Laskowski BIBREF7 had found. However, even given the strong individual laughter feature, the other features add information, pushing the UAR from 75.1% to 77.5%.
Conclusion
We studied the detection of areas of high involvement, or “hot spots”, within meetings using the ICSI corpus. The features that yielded the best results are in line with our intuitions. Word embeddings, speech activity features such as the number of turn changes, and prosodic features are all plausible indicators of high involvement. Furthermore, the feature sets are partly complementary and yield the best results when combined using a simple logistic regression model. The combined model achieves 72.6% UAR, or 77.5% with the laughter feature.
For future work, we would want to see a validation on an independent meeting collection, such as business meetings. Some features, in particular laughter, are bound not to be as useful in this case. More data could also enable the training of joint models that perform an early fusion of the different feature types. Also, the present study still relied on human transcripts, and it would be important to know how much UAR suffers with a realistic amount of speech recognition error. Transcription errors are expected to boost the importance of the feature types that do not rely on words.
Acknowledgments
We thank Britta Wrede, Elizabeth Shriberg and Kornel Laskowski for explanations concerning the details of the data. | No |
fdd9dea06550a2fd0df7a1e6a5109facf3601d76 | fdd9dea06550a2fd0df7a1e6a5109facf3601d76_0 | Q: How big is ICSI meeting corpus?
Text: Introduction and Prior Work
A definition of the meeting “hot spots” was first introduced in BIBREF2, where it was investigated whether human annotators could reliably identify regions in which participants are “highly involved in the discussion”. The motivation was that meetings generally have low information density and are tedious to review verbatim after the fact. An automatic system that could detect regions of high interest (as indicated by the involvement of the participants during the meeting) would thus be useful. Relatedly, automatic meeting summarization could also benefit from such information to give extra weight to hot spot regions in selecting or abstracting material for inclusion in the summary. Later work on the relationship between involvement and summarization BIBREF3 defined a different approach: hot spots are those regions chosen for inclusion in a summary by human annotators (“summarization hot spots”). In the present work we stick with the original “involvement hot spot” notion, and refer to such regions simply as “hot spots”, regardless of their possible role in summarization. We note that high involvement may be triggered both by a meeting's content (“what is being talked about”, and “what may be included in a textual summary”), as well as by behavioral and social factors, such as a desire to participate, to stake out a position, or to oppose another participant. A related notion in dialog system research is “level of interest” BIBREF4.
The initial research on hot spots focused on the reliability of human annotators and correlations with certain low-level acoustic features, such as pitch BIBREF2. Also investigated were the correlations between hot spots and dialog acts BIBREF5 and between hot spots and speaker overlap BIBREF6, without, however, conducting experiments in automatic hot spot prediction using machine learning techniques. Laskowski BIBREF7 redefined the hot spot annotations in terms of time-based windows over meetings, and investigated various classifier models to detect “hotness” (i.e., elevated involvement). However, that work focused on only two types of speech features: presence of laughter and the temporal patterns of speech activity across the various participants, both of which were found to be predictive of involvement.
For the related problem of level-of-interest prediction in dialog systems BIBREF8, it was found that content-based classification can also be effective, using both a discriminative TF-IDF model and lexical affect scores, as well as prosodic features. In line with the earlier hot spot research on interaction patterns and speaker overlap, turn-taking features were shown to be helpful for spotting summarization hot spots, in BIBREF3, and even more so than the human involvement annotations. The latter result confirms our intuition that summarization-worthiness and involvement are different notions of “hotness”.
In this paper, following Laskowski, we focus on the automatic prediction of the speakers' involvement in sliding-time windows/segments. We evaluate machine learning models based on a range of features that can be extracted automatically from audio recordings, either directly via signal processing or via the use of automatic transcriptions (ASR outputs). In particular, we investigate the relative contributions of three classes of information:
low-level acoustic-prosodic features, such as those commonly used in other paralinguistic tasks, such as sentiment analysis (extracted using openSMILE BIBREF0);
spoken word content, as encoded with a state-of-the-art lexical embedding approach such as BERT BIBREF1;
speaker interaction, based on speech activity over time and across different speakers.
We attach lower importance to laughter, even though it was found to be highly predictive of involvement in the ICSI corpus, partly because we believe it would not transfer well to more general types of (e.g., business) meetings, and partly because laughter detection is still a hard problem in itself BIBREF9. Generation of speaker-attributed meeting transcriptions, on the other hand, has seen remarkable progress BIBREF10 and could support the features we focus on here.
Data
The ICSI Meeting Corpus BIBREF11 is a collection of meeting recordings that has been thoroughly annotated, including annotations for involvement hot spots BIBREF12, linguistic utterance units, and word time boundaries based on forced alignment. The dataset comprises 75 meetings and about 70 hours of real-time audio duration, with 6 speakers per meeting on average. Most of the participants are well-acquainted and friendly with each other. Hot spots were originally annotated with 8 levels and degrees, ranging from `not hot' to `luke warm' to `hot +'. Every utterance was labeled with one of these discrete labels by a single annotator. Heightened involvement is rare, being marked on only 1% of utterances.
Due to the severe imbalance in the label distribution, Laskowski BIBREF13 proposed extending the involvement, or hotness, labels to sliding time windows. In our implementation (details below), this resulted in 21.7% of samples (windows) being labeled as “involved”.
We split the corpus into three subsets: training, development, and evaluation, keeping meetings intact. Table TABREF4 gives statistics of these partitions.
We were concerned about the relatively small number of meetings in the test sets, and repeated several of our experiments with a (jackknifing) cross-validation setup over the training set. The results obtained were very similar to the fixed train/test split results that we report here.
Data ::: Time Windowing
As stated above, the corpus was originally labeled for hot spots at the utterance level, where involvement was marked by either a `b' or a `b+' label. Training and test samples for our experiments correspond to 60 s-long sliding windows, with a 15 s step size. If a certain window, e.g., a segment spanning the times 15 s ...75 s, overlaps with any involved speech utterance, then we label that whole window as `hot'. Fig. FIGREF6 gives a visual representation.
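A minimal sketch of this windowing and labeling step, assuming utterances are given as (start, end, involved) tuples in seconds (a hypothetical format):

```python
def label_windows(utterances, meeting_end, win=60.0, step=15.0):
    """Return (start, end, label) for each sliding window; label is 1 if any
    involved utterance overlaps the window."""
    windows, t = [], 0.0
    while t < meeting_end:
        w_start, w_end = t, t + win
        hot = any(s < w_end and e > w_start and involved
                  for (s, e, involved) in utterances)
        windows.append((w_start, w_end, int(hot)))
        t += step
    return windows
```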
Data ::: Metric
In spite of the windowing approach, the class distribution is still skewed, and an accuracy metric would reflect the particular class distribution in our data set. Therefore, we adopt the unweighted average recall (UAR) metric commonly used in emotion classification research. UAR is a reweighted accuracy where the samples of both classes are weighted equally in aggregate. UAR thus simulates a uniform class distribution. To match the objective, our classifiers are trained on appropriately weighted training data. Note that chance performance for UAR is by definition 50%, making results more comparable across different data sets.
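UAR is simply recall averaged over classes with equal weight, which scikit-learn exposes as macro-averaged recall; a minimal sketch:

```python
from sklearn.metrics import recall_score

def uar(y_true, y_pred):
    # Macro-averaged recall = unweighted average recall; chance level is 50% for two classes.
    return recall_score(y_true, y_pred, average="macro")
```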
Feature Description ::: Acoustic-Prosodic Features
Prosody encompasses pitch, energy, and durational features of speech. Prosody is thought to convey emphasis, sentiment, and emotion, all of which are presumably correlated with expressions of involvement. We used the openSMILE toolkit BIBREF0 to compute 988 features as defined by the emobase988 configuration file, operating on the close-talking meeting recordings. This feature set consists of low-level descriptors such as intensity, loudness, Mel-frequency cepstral coefficients, and pitch. For each low-level descriptor, functionals such as max/min value, mean, standard deviation, kurtosis, and skewness are computed. Finally, global mean and variance normalization are applied to each feature, using training set statistics. The feature vector thus captures acoustic-prosodic features aggregated over what are typically utterances. We tried extracting openSMILE features directly from 60 s windows, but found better results by extracting subwindows of 5 s, followed by pooling over the longer 60 s duration. We attribute this to the fact that emobase features are designed to operate on individual utterances, which have durations closer to 5 s than 60 s.
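The pooling and normalization described here can be sketched as follows, assuming the 988-dimensional emobase988 functionals have already been extracted per 5 s subwindow with openSMILE; the max-pooling in this sketch is only illustrative (the pooling actually used inside the prosodic network is described in Section SECREF15).

```python
import numpy as np

def pool_subwindows(sub_feats):
    """sub_feats: (n_subwindows, 988) array of emobase988 functionals for one 60 s window."""
    return sub_feats.max(axis=0)          # one 988-d vector per window (illustrative pooling)

def global_normalize(train_X, eval_X):
    """Mean/variance normalization per feature, using training-set statistics only."""
    mu, sigma = train_X.mean(axis=0), train_X.std(axis=0) + 1e-8
    return (train_X - mu) / sigma, (eval_X - mu) / sigma
```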
Feature Description ::: Word-Based Features ::: Bag of words with TF-IDF
Initially, we investigated a simple bag-of-words model including all unigrams, bigrams, and trigrams found in the training set. Occurrences of the top 10,000 n-grams were encoded to form a 10,000-dimensional vector, with values weighted according to TF-IDF. TF-IDF weights n-grams according to both their frequency (TF) and their salience (inverse document frequency, IDF) in the data, where each utterance was treated as a separate document. The resulting feature vectors are very sparse.
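With scikit-learn, such a model can be sketched in a few lines; train_docs and eval_docs are hypothetical lists of utterance strings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer(ngram_range=(1, 3),   # unigrams, bigrams, and trigrams
                             max_features=10000)   # keep the top 10,000 n-grams
X_train = vectorizer.fit_transform(train_docs)     # sparse TF-IDF matrix, one row per utterance
X_eval = vectorizer.transform(eval_docs)
```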
Feature Description ::: Word-Based Features ::: Embeddings
The ICSI dataset is too small to train a neural embedding model from scratch. Therefore, it is convenient to use the pre-trained BERT embedding architecture BIBREF1 to create an utterance-level embedding vector for each region of interest. Having been trained on a large text corpus, the resulting embeddings encode semantic similarities among utterances, and would enable generalization from word patterns seen in the ICSI training data to those that have not been observed on that limited corpus.
We had previously also created an adapted version of the BERT model, tuned to perform utterance-level sentiment classification on a separate dataset BIBREF14. As proposed in BIBREF1, we fine-tuned all layers of the pre-trained BERT model by adding a single fully-connected layer and classifying using only the embedding corresponding to the classification ([CLS]) token prepended to each utterance. The difference in UAR between the hot spot classifiers using the pre-trained embeddings and those using the sentiment-adapted embeddings is small. Since the classifier using embeddings extracted by the sentiment-adapted model yielded slightly better performance, we report all results using these as input.
To obtain a single embedding for each 60 s window, we experimented with various approaches of pooling the token and utterance-level embeddings. For our first approach, we ignored the ground-truth utterance segmentation and speaker information. We merged all words spoken within a particular window into a single contiguous span. Following BIBREF1, we added the appropriate classification and separation tokens to the text and selected the embedding corresponding to the [CLS] token as the window-level embedding. Our second approach used the ground-truth segmentation of the dialogue. Each speaker turn was independently modeled, and utterance-level embeddings were extracted using the representation corresponding to the [CLS] token. Utterances that cross window boundaries are truncated using the word timestamps, so only words spoken within the given time window are considered. For all reported experiments, we use L2-norm pooling to form the window-level embeddings for the final classifier, as this performed better than either mean or max pooling.
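The per-utterance [CLS] extraction and the L2-norm pooling can be sketched with the Hugging Face transformers library as below. The public bert-large-uncased checkpoint is used only as a stand-in for the sentiment-adapted model, and the element-wise L2 norm across utterances is our reading of “L2-norm pooling”.

```python
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")   # stand-in for the adapted model
bert = AutoModel.from_pretrained("bert-large-uncased").eval()

def utterance_embedding(text):
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = bert(**inputs)
    return out.last_hidden_state[0, 0].numpy()      # embedding of the [CLS] token

def window_embedding(utterance_texts):
    embs = np.stack([utterance_embedding(t) for t in utterance_texts])
    return np.linalg.norm(embs, ord=2, axis=0)      # element-wise L2-norm pooling over utterances
```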
Feature Description ::: Speaker Activity Features
These features were a compilation of three different feature types (a minimal computation sketch follows the list below):
Speaker overlap percentages: Based on the available word-level times, we computed a 6-dimensional feature vector, where the $i$th index indicates the fraction of time that $i$ or more speakers are talking within a given window. This can be expressed by $\frac{t_i}{60}$ with $t_i$ indicating the time in seconds that $i$ or more people were speaking at the same time.
Unique speaker count: Counts the unique speakers within a window, as a useful metric to track the diversity of participation within a certain window.
Turn switch count: Counts the number of times a speaker begins talking within a window. This is a similar metric to the number of utterances. However, unlike utterance count, turn switches can be computed entirely from speech activity, without requiring a linguistic segmentation.
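As referenced above, here is a minimal sketch of these three feature types, computed from hypothetical (speaker, start, end) turn tuples for a single window; the overlap fractions are approximated on a coarse time grid.

```python
import numpy as np

def speaker_activity_features(turns, win_start, win_end, n_speakers=6, grid=0.1):
    """turns: list of (speaker, start, end) tuples overlapping one 60 s window."""
    times = np.arange(win_start, win_end, grid)
    counts = np.zeros(len(times))                     # simultaneous speakers at each grid point
    for _, s, e in turns:
        counts += ((times >= s) & (times < e)).astype(float)
    # (1) overlap percentages: fraction of window time with >= i speakers talking
    overlap = [float((counts >= i).mean()) for i in range(1, n_speakers + 1)]
    # (2) unique speaker count within the window
    unique_speakers = len({spk for spk, _, _ in turns})
    # (3) turn switches: number of turn starts falling inside the window
    turn_switches = sum(1 for _, s, _ in turns if win_start <= s < win_end)
    return overlap + [unique_speakers, turn_switches]
```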
Feature Description ::: Laughter Count
Laskowski found that laughter is highly predictive of involvement in the ICSI data. Laughter is annotated at the utterance level and falls into two categories: laughter solely on its own (no words) or laughter contained within an utterance (i.e., during speech). The feature is a simple tally of the number of times people laughed within a window. We include it in some of our experiments for comparison purposes, though we do not trust it as a general feature. (The participants in the ICSI meetings are far too familiar and at ease with each other to be representative with regard to laughter.)
Modeling ::: Non-Neural Models
In preliminary experiments, we compared several non-neural classifiers, including logistic regression (LR), random forests, linear support vector machines, and multinomial naive Bayes. Logistic regression gave the best results all around, and we used it exclusively for the results shown here, except where neural networks are used instead.
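A sketch of the LR baseline, with class weights chosen so that both classes contribute equally (matching the UAR objective); X_train, y_train, and X_eval are hypothetical window-level feature matrices and labels.

```python
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_eval)
```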
Modeling ::: Feed-Forward Neural Networks ::: Pooling Techniques
For BERT and openSMILE vector classification, we designed two different feed-forward neural network architectures. The sentiment-adapted embeddings described in Section SECREF3 produce one 1024-dimensional vector per utterance. Since all classification operates on time windows, we had to pool over all utterances falling within a given window, taking care to truncate words falling outside the window. We tested four pooling methods: L2-norm, mean, max, and min, with L2-norm giving the best results.
As for the prosodic model, each vector extracted from openSMILE represents a 5 s interval. Since there is both a channel/speaker axis and a time axis, we needed to pool over both dimensions in order to obtain a single vector representing the prosodic features of a 60 s window. The second-to-last layer is the pooling layer, which max-pools across all channels and then mean-pools over time. The output of the pooling layer is directly fed into the classifier.
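The channel/time pooling step can be sketched as a small function; in the 988-512-128-16-Pool-2 network it is applied to the 16-dimensional hidden representation, but the operation itself is dimension-agnostic.

```python
import torch

def channel_time_pool(x):
    """x: (channels, time_steps, feat_dim) hidden activations for one 60 s window."""
    x, _ = x.max(dim=0)     # max-pool across channels/speakers -> (time_steps, feat_dim)
    return x.mean(dim=0)    # mean-pool over time               -> (feat_dim,)
```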
Modeling ::: Feed-Forward Neural Networks ::: Hyperparameters
The hyperparameters of the neural networks (hidden layer number and sizes) were also tuned in preliminary experiments. Details are given in Section SECREF5.
Modeling ::: Model Fusion
Fig. FIGREF19 depicts the way features from multiple categories are combined. Speech activity and word features are fed directly into a final LR step. Acoustic-prosodic features are first combined in a feed-forward neural classifier, whose output log posteriors are in turn fed into the LR step for fusion. (When using only prosodic features, the ANN outputs are used directly.)
Experiments
We group experiments by the type of features they are based on: acoustic-prosodic, word-based, and speech activity, evaluating each group first by itself, and then in combination with the others.
Experiments ::: Speech Feature Results
As discussed in Section SECREF3, a multitude of input features were investigated, with some being more discriminative than others. The most useful speech activity features were speaker overlap percentage, number of unique speakers, and number of turn switches, giving evaluation set UARs of 63.5%, 63.9%, and 66.6%, respectively. When combined, the UAR improved to 68.0%, showing that these features are partly complementary.
Experiments ::: Word-Based Results
The TF-IDF model alone gave a UAR of 59.8%. A drastic increase in performance, to 70.5%, was found when using the BERT embeddings instead. We therefore adopted BERT embeddings for all further experiments based on word information.
Three different types of embeddings were investigated: sentiment-adapted embeddings at the utterance level, unadapted embeddings at the utterance level, and unadapted embeddings over time windows.
The adapted embeddings (on utterances) performed best, indicating that adaptation to the sentiment task is useful for involvement classification. It is important to note, however, that the set of utterance-level embeddings is larger than the set of window-level embeddings, since there are more utterances than windows in the meeting corpus.
The best neural architecture we found for these embeddings is a 5-layer neural network with sizes 1024-64-32-12-2. Other hyperparameters for this model are dropout rate = 0.4, learning rate = $10^{-7}$ and activation function “tanh”. The UAR on the evaluation set with just BERT embeddings as input is 65.2%.
Interestingly, the neural model was outperformed by an LR applied directly to the embedding vectors. Perhaps the neural network requires further fine-tuning, or it is too prone to overfitting given the small training corpus. In any case, we use LR on embeddings for all subsequent results.
Experiments ::: Acoustic-Prosodic Feature Results
Our prosodic model is a 5-layer ANN, as described in Section SECREF15. The architecture is: 988-512-128-16-Pool-2. The hyperparameters are: dropout rate = 0.4, learning rate = $10^{-7}$, activation = “tanh”. The UAR on the evaluation set with just openSMILE features is 62.0%.
Experiments ::: Fusion Results and Discussion
Table TABREF24 gives the UAR for each feature subset individually, for all features combined, and for combinations in which one feature subset in turn is left out. The one-feature-set-at-a-time results suggest that prosody, speech activity, and words are of increasing importance, in that order. The leave-one-out analysis agrees that the words are the most important (largest drop in accuracy when removed), but on that criterion the prosodic features are more important than the speech activity features. The combination of all features is 0.4% absolute better than any other subset, showing that all feature subsets are partly complementary.
Fig. FIGREF25 shows the same results in histogram form, but also adds those with laughter information. Laughter count by itself is the strongest cue to involvement, as Laskowski BIBREF7 had found. However, even given the strong individual laughter feature, the other features add information, pushing the UAR from 75.1% to 77.5%.
Conclusion
We studied the detection of areas of high involvement, or “hot spots”, within meetings using the ICSI corpus. The features that yielded the best results are in line with our intuitions. Word embeddings, speech activity features such as the number of turn changes, and prosodic features are all plausible indicators of high involvement. Furthermore, the feature sets are partly complementary and yield the best results when combined using a simple logistic regression model. The combined model achieves 72.6% UAR, or 77.5% with the laughter feature.
For future work, we would want to see a validation on an independent meeting collection, such as business meetings. Some features, in particular laughter, are bound not to be as useful in this case. More data could also enable the training of joint models that perform an early fusion of the different feature types. Also, the present study still relied on human transcripts, and it would be important to know how much UAR suffers with a realistic amount of speech recognition error. Transcription errors are expected to boost the importance of the feature types that do not rely on words.
Acknowledgments
We thank Britta Wrede, Elizabeth Shriberg and Kornel Laskowski for explanations concerning the details of the data. | 75 meetings and about 70 hours of real-time audio duration |